Test Report: Docker_Linux_crio 22414

7225a17c4161ad48c671012cf8528dba752659f9:2026-01-10:43179

Tests failed (26/332)

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable volcano --alsologtostderr -v=1: exit status 11 (240.378023ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 01:54:55.042911   23440 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:54:55.043047   23440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:54:55.043058   23440 out.go:374] Setting ErrFile to fd 2...
	I0110 01:54:55.043062   23440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:54:55.043261   23440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:54:55.043496   23440 mustload.go:66] Loading cluster: addons-600454
	I0110 01:54:55.043800   23440 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:55.043814   23440 addons.go:622] checking whether the cluster is paused
	I0110 01:54:55.043909   23440 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:55.043922   23440 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:55.044340   23440 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:55.061934   23440 ssh_runner.go:195] Run: systemctl --version
	I0110 01:54:55.061973   23440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:55.079871   23440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:55.170425   23440 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:54:55.170509   23440 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:54:55.199444   23440 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:54:55.199463   23440 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:54:55.199468   23440 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:54:55.199473   23440 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:54:55.199477   23440 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:54:55.199481   23440 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:54:55.199486   23440 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:54:55.199490   23440 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:54:55.199494   23440 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:54:55.199502   23440 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:54:55.199507   23440 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:54:55.199513   23440 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:54:55.199521   23440 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:54:55.199526   23440 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:54:55.199531   23440 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:54:55.199546   23440 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:54:55.199551   23440 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:54:55.199557   23440 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:54:55.199561   23440 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:54:55.199567   23440 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:54:55.199578   23440 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:54:55.199588   23440 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:54:55.199593   23440 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:54:55.199598   23440 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:54:55.199607   23440 cri.go:96] found id: ""
	I0110 01:54:55.199655   23440 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:54:55.213619   23440 out.go:203] 
	W0110 01:54:55.214746   23440 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:54:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:54:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:54:55.214766   23440 out.go:285] * 
	* 
	W0110 01:54:55.215454   23440 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:54:55.216526   23440 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

TestAddons/parallel/Registry (13.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 2.821479ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-mlf8m" [94bf43da-60ab-405e-8e3c-ba8318d37ad2] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.001933912s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zx8d4" [591ae63a-173e-4fb4-89b9-9fd8522cb1c1] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003524652s
addons_test.go:394: (dbg) Run:  kubectl --context addons-600454 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-600454 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-600454 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.340027831s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable registry --alsologtostderr -v=1: exit status 11 (252.559724ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 01:55:17.679100   26021 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:17.679459   26021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:17.679469   26021 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:17.679475   26021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:17.679755   26021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:17.680338   26021 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:17.680924   26021 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:17.680977   26021 addons.go:622] checking whether the cluster is paused
	I0110 01:55:17.681143   26021 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:17.681179   26021 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:17.681605   26021 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:17.702797   26021 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:17.702861   26021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:17.721479   26021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:17.812628   26021 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:17.812734   26021 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:17.842559   26021 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:17.842583   26021 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:17.842590   26021 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:17.842595   26021 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:17.842600   26021 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:17.842607   26021 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:17.842612   26021 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:17.842617   26021 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:17.842621   26021 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:17.842628   26021 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:17.842640   26021 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:17.842659   26021 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:17.842668   26021 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:17.842677   26021 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:17.842689   26021 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:17.842700   26021 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:17.842705   26021 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:17.842710   26021 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:17.842715   26021 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:17.842722   26021 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:17.842728   26021 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:17.842735   26021 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:17.842738   26021 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:17.842743   26021 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:17.842746   26021 cri.go:96] found id: ""
	I0110 01:55:17.842791   26021 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:17.855952   26021 out.go:203] 
	W0110 01:55:17.857079   26021 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:17.857096   26021 out.go:285] * 
	* 
	W0110 01:55:17.857748   26021 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:17.858807   26021 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.83s)

TestAddons/parallel/RegistryCreds (0.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.025138ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-600454
addons_test.go:334: (dbg) Run:  kubectl --context addons-600454 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (244.500633ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 01:55:18.084406   26136 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:18.084549   26136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:18.084560   26136 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:18.084565   26136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:18.084734   26136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:18.085026   26136 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:18.085376   26136 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:18.085392   26136 addons.go:622] checking whether the cluster is paused
	I0110 01:55:18.085476   26136 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:18.085490   26136 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:18.085846   26136 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:18.103972   26136 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:18.104031   26136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:18.121420   26136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:18.213494   26136 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:18.213559   26136 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:18.245858   26136 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:18.245880   26136 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:18.245898   26136 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:18.245903   26136 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:18.245908   26136 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:18.245913   26136 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:18.245917   26136 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:18.245922   26136 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:18.245927   26136 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:18.245934   26136 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:18.245942   26136 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:18.245946   26136 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:18.245951   26136 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:18.245958   26136 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:18.245968   26136 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:18.245975   26136 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:18.245980   26136 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:18.245986   26136 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:18.245990   26136 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:18.245995   26136 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:18.246004   26136 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:18.246009   26136 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:18.246014   26136 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:18.246019   26136 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:18.246023   26136 cri.go:96] found id: ""
	I0110 01:55:18.246081   26136 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:18.261457   26136 out.go:203] 
	W0110 01:55:18.262694   26136 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:18.262709   26136 out.go:285] * 
	* 
	W0110 01:55:18.263710   26136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:18.264993   26136 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

TestAddons/parallel/Ingress (10.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-600454 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-600454 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-600454 replace --force -f testdata/nginx-pod-svc.yaml
2026/01/10 01:55:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [2f7d6515-bc61-49cc-830f-e79cc5696781] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [2f7d6515-bc61-49cc-830f-e79cc5696781] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002633543s
I0110 01:55:26.703754   14086 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-600454 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (264.582382ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 01:55:27.491971   27275 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:27.492088   27275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:27.492096   27275 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:27.492100   27275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:27.492311   27275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:27.492562   27275 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:27.492871   27275 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:27.492896   27275 addons.go:622] checking whether the cluster is paused
	I0110 01:55:27.492976   27275 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:27.492987   27275 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:27.493497   27275 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:27.516370   27275 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:27.516428   27275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:27.539731   27275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:27.636197   27275 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:27.636261   27275 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:27.667849   27275 cri.go:96] found id: "47e9e72643236e07828482d86c89624cfa4fe5e3784517703cd104f9cce77eba"
	I0110 01:55:27.667869   27275 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:27.667873   27275 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:27.667877   27275 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:27.667880   27275 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:27.667897   27275 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:27.667903   27275 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:27.667907   27275 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:27.667912   27275 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:27.667936   27275 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:27.667951   27275 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:27.667959   27275 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:27.667964   27275 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:27.667970   27275 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:27.667972   27275 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:27.667977   27275 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:27.667980   27275 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:27.667984   27275 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:27.667986   27275 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:27.667989   27275 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:27.667994   27275 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:27.667999   27275 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:27.668002   27275 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:27.668004   27275 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:27.668009   27275 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:27.668013   27275 cri.go:96] found id: ""
	I0110 01:55:27.668059   27275 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:27.689194   27275 out.go:203] 
	W0110 01:55:27.690593   27275 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:27.690752   27275 out.go:285] * 
	* 
	W0110 01:55:27.692542   27275 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:27.693680   27275 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable ingress --alsologtostderr -v=1: exit status 11 (243.298242ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 01:55:27.757722   27627 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:27.758089   27627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:27.758102   27627 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:27.758107   27627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:27.758376   27627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:27.758751   27627 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:27.759273   27627 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:27.759296   27627 addons.go:622] checking whether the cluster is paused
	I0110 01:55:27.759432   27627 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:27.759448   27627 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:27.760000   27627 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:27.779515   27627 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:27.779575   27627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:27.798178   27627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:27.890290   27627 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:27.890382   27627 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:27.919942   27627 cri.go:96] found id: "47e9e72643236e07828482d86c89624cfa4fe5e3784517703cd104f9cce77eba"
	I0110 01:55:27.919971   27627 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:27.919978   27627 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:27.919985   27627 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:27.919990   27627 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:27.919996   27627 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:27.920003   27627 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:27.920009   27627 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:27.920015   27627 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:27.920024   27627 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:27.920030   27627 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:27.920036   27627 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:27.920043   27627 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:27.920054   27627 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:27.920060   27627 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:27.920080   27627 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:27.920090   27627 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:27.920102   27627 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:27.920112   27627 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:27.920118   27627 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:27.920127   27627 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:27.920137   27627 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:27.920146   27627 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:27.920151   27627 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:27.920159   27627 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:27.920164   27627 cri.go:96] found id: ""
	I0110 01:55:27.920219   27627 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:27.933882   27627 out.go:203] 
	W0110 01:55:27.935162   27627 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:27.935190   27627 out.go:285] * 
	* 
	W0110 01:55:27.936155   27627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:27.937274   27627 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (10.74s)

TestAddons/parallel/InspektorGadget (6.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-g22r7" [9b644588-c0f0-4c76-ad0f-a62aaa94a67b] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002809271s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (229.992904ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 01:55:23.490207   26871 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:23.490333   26871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:23.490343   26871 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:23.490347   26871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:23.490506   26871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:23.490738   26871 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:23.491046   26871 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:23.491059   26871 addons.go:622] checking whether the cluster is paused
	I0110 01:55:23.491138   26871 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:23.491149   26871 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:23.491522   26871 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:23.509987   26871 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:23.510028   26871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:23.526975   26871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:23.616675   26871 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:23.616771   26871 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:23.646198   26871 cri.go:96] found id: "47e9e72643236e07828482d86c89624cfa4fe5e3784517703cd104f9cce77eba"
	I0110 01:55:23.646231   26871 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:23.646237   26871 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:23.646242   26871 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:23.646246   26871 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:23.646252   26871 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:23.646256   26871 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:23.646259   26871 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:23.646263   26871 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:23.646276   26871 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:23.646281   26871 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:23.646285   26871 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:23.646290   26871 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:23.646305   26871 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:23.646313   26871 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:23.646324   26871 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:23.646328   26871 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:23.646334   26871 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:23.646338   26871 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:23.646342   26871 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:23.646353   26871 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:23.646361   26871 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:23.646367   26871 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:23.646375   26871 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:23.646379   26871 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:23.646383   26871 cri.go:96] found id: ""
	I0110 01:55:23.646438   26871 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:23.660253   26871 out.go:203] 
	W0110 01:55:23.661533   26871 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:23.661550   26871 out.go:285] * 
	* 
	W0110 01:55:23.662264   26871 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:23.663467   26871 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.23s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.831843ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-pj8xt" [81ec1b8b-b693-480e-b5c6-1d50cb816a02] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002519253s
addons_test.go:465: (dbg) Run:  kubectl --context addons-600454 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (238.526079ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:14.417431   24270 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:14.417716   24270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:14.417726   24270 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:14.417730   24270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:14.417980   24270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:14.418285   24270 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:14.418631   24270 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:14.418646   24270 addons.go:622] checking whether the cluster is paused
	I0110 01:55:14.418740   24270 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:14.418754   24270 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:14.419155   24270 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:14.436792   24270 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:14.436858   24270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:14.455595   24270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:14.548473   24270 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:14.548542   24270 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:14.577605   24270 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:14.577630   24270 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:14.577635   24270 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:14.577638   24270 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:14.577641   24270 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:14.577645   24270 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:14.577654   24270 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:14.577657   24270 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:14.577659   24270 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:14.577668   24270 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:14.577671   24270 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:14.577674   24270 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:14.577677   24270 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:14.577679   24270 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:14.577682   24270 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:14.577694   24270 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:14.577699   24270 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:14.577704   24270 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:14.577706   24270 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:14.577709   24270 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:14.577711   24270 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:14.577714   24270 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:14.577717   24270 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:14.577719   24270 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:14.577722   24270 cri.go:96] found id: ""
	I0110 01:55:14.577760   24270 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:14.595036   24270 out.go:203] 
	W0110 01:55:14.596272   24270 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:14.596292   24270 out.go:285] * 
	* 
	W0110 01:55:14.596962   24270 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:14.598170   24270 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)

                                                
                                    
TestAddons/parallel/CSI (33.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0110 01:55:15.563100   14086 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 01:55:15.566880   14086 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 01:55:15.566923   14086 kapi.go:107] duration metric: took 3.835581ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.8471ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-600454 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-600454 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [42b54195-7eaf-4e35-8ae5-5fd92eb33f97] Pending
helpers_test.go:353: "task-pv-pod" [42b54195-7eaf-4e35-8ae5-5fd92eb33f97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [42b54195-7eaf-4e35-8ae5-5fd92eb33f97] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003357749s
addons_test.go:574: (dbg) Run:  kubectl --context addons-600454 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-600454 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-600454 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-600454 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-600454 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-600454 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-600454 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [3a34a7c6-85dc-4236-9303-afce147c4f9f] Pending
helpers_test.go:353: "task-pv-pod-restore" [3a34a7c6-85dc-4236-9303-afce147c4f9f] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003099913s
addons_test.go:616: (dbg) Run:  kubectl --context addons-600454 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-600454 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-600454 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (227.928557ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:48.889826   28190 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:48.890173   28190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:48.890187   28190 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:48.890194   28190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:48.890411   28190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:48.890757   28190 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:48.891098   28190 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:48.891116   28190 addons.go:622] checking whether the cluster is paused
	I0110 01:55:48.891216   28190 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:48.891232   28190 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:48.891625   28190 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:48.908718   28190 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:48.908790   28190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:48.924673   28190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:49.015224   28190 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:49.015316   28190 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:49.043717   28190 cri.go:96] found id: "47e9e72643236e07828482d86c89624cfa4fe5e3784517703cd104f9cce77eba"
	I0110 01:55:49.043740   28190 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:49.043745   28190 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:49.043749   28190 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:49.043752   28190 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:49.043755   28190 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:49.043758   28190 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:49.043760   28190 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:49.043763   28190 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:49.043768   28190 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:49.043773   28190 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:49.043779   28190 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:49.043784   28190 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:49.043789   28190 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:49.043799   28190 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:49.043815   28190 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:49.043819   28190 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:49.043825   28190 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:49.043831   28190 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:49.043834   28190 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:49.043840   28190 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:49.043843   28190 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:49.043850   28190 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:49.043852   28190 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:49.043855   28190 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:49.043858   28190 cri.go:96] found id: ""
	I0110 01:55:49.043919   28190 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:49.057531   28190 out.go:203] 
	W0110 01:55:49.058726   28190 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:49.058752   28190 out.go:285] * 
	* 
	W0110 01:55:49.059463   28190 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:49.060587   28190 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (226.108784ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:49.117924   28252 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:49.118186   28252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:49.118196   28252 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:49.118203   28252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:49.118422   28252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:49.118679   28252 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:49.119009   28252 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:49.119023   28252 addons.go:622] checking whether the cluster is paused
	I0110 01:55:49.119100   28252 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:49.119111   28252 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:49.119468   28252 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:49.136144   28252 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:49.136200   28252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:49.151711   28252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:49.242106   28252 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:49.242192   28252 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:49.270515   28252 cri.go:96] found id: "47e9e72643236e07828482d86c89624cfa4fe5e3784517703cd104f9cce77eba"
	I0110 01:55:49.270541   28252 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:49.270545   28252 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:49.270548   28252 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:49.270551   28252 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:49.270555   28252 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:49.270558   28252 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:49.270561   28252 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:49.270563   28252 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:49.270572   28252 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:49.270575   28252 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:49.270578   28252 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:49.270581   28252 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:49.270584   28252 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:49.270588   28252 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:49.270600   28252 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:49.270604   28252 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:49.270608   28252 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:49.270620   28252 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:49.270623   28252 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:49.270626   28252 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:49.270629   28252 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:49.270632   28252 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:49.270635   28252 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:49.270638   28252 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:49.270640   28252 cri.go:96] found id: ""
	I0110 01:55:49.270685   28252 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:49.283609   28252 out.go:203] 
	W0110 01:55:49.284810   28252 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:49.284826   28252 out.go:285] * 
	* 
	W0110 01:55:49.285549   28252 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:49.286757   28252 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (33.73s)
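The CSI workflow itself completed: hpvc bound, task-pv-pod ran, the new-snapshot-demo snapshot was taken, and hpvc-restore backed task-pv-pod-restore; only the two trailing addon disable calls failed, again on the same runc paused check. For context, a restore claim of the kind testdata/csi-hostpath-driver/pvc-restore.yaml creates looks roughly like the sketch below; the object names are taken from the log, while the storage class and size are assumptions (the real manifest lives in minikube's testdata and may differ):

	kubectl --context addons-600454 create -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc   # assumed class name; must match the snapshot's source PVC
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo           # the snapshot created earlier in this test
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi                    # assumed size
	EOF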

                                                
                                    
TestAddons/parallel/Headlamp (2.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-600454 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-600454 --alsologtostderr -v=1: exit status 11 (250.488426ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:14.664694   24362 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:14.664846   24362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:14.664857   24362 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:14.664863   24362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:14.665080   24362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:14.665342   24362 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:14.665690   24362 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:14.665718   24362 addons.go:622] checking whether the cluster is paused
	I0110 01:55:14.665826   24362 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:14.665842   24362 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:14.666284   24362 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:14.685677   24362 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:14.685755   24362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:14.706580   24362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:14.799510   24362 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:14.799602   24362 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:14.832904   24362 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:14.832930   24362 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:14.832935   24362 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:14.832939   24362 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:14.832942   24362 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:14.832946   24362 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:14.832949   24362 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:14.832951   24362 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:14.832954   24362 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:14.832959   24362 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:14.832962   24362 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:14.832966   24362 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:14.832968   24362 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:14.832971   24362 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:14.832974   24362 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:14.832983   24362 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:14.832987   24362 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:14.832993   24362 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:14.832995   24362 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:14.832998   24362 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:14.833005   24362 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:14.833008   24362 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:14.833011   24362 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:14.833013   24362 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:14.833016   24362 cri.go:96] found id: ""
	I0110 01:55:14.833057   24362 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:14.846637   24362 out.go:203] 
	W0110 01:55:14.847940   24362 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:14.847958   24362 out.go:285] * 
	* 
	W0110 01:55:14.848645   24362 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:14.849776   24362 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-600454 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-600454
helpers_test.go:244: (dbg) docker inspect addons-600454:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086",
	        "Created": "2026-01-10T01:53:47.408337655Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T01:53:47.438061057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086/hostname",
	        "HostsPath": "/var/lib/docker/containers/cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086/hosts",
	        "LogPath": "/var/lib/docker/containers/cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086/cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086-json.log",
	        "Name": "/addons-600454",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-600454:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-600454",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cfeb52efe9cbf7f8e86714da6de2f6f2ba4323d4d1b5691e1a6c53c2d1a45086",
	                "LowerDir": "/var/lib/docker/overlay2/f1fe71d159a2d7d5c3c063ab4b18e09defaf47de2e07226418393dd0ee0c1683-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1fe71d159a2d7d5c3c063ab4b18e09defaf47de2e07226418393dd0ee0c1683/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1fe71d159a2d7d5c3c063ab4b18e09defaf47de2e07226418393dd0ee0c1683/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1fe71d159a2d7d5c3c063ab4b18e09defaf47de2e07226418393dd0ee0c1683/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-600454",
	                "Source": "/var/lib/docker/volumes/addons-600454/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-600454",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-600454",
	                "name.minikube.sigs.k8s.io": "addons-600454",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bb94c5490b96544d87bea0335428440f80b0e3e0c7ada607fa1d7fdddfe2b6fc",
	            "SandboxKey": "/var/run/docker/netns/bb94c5490b96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-600454": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "800c30fadc211ff07412bcb77b95f6107baf1c172902ecdbd9df3488d2838ce8",
	                    "EndpointID": "2ed30e2dc6531c0a06243240e72ad79ef8f61d792631adbf33599c64513ea2ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "12:5f:5f:02:69:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-600454",
	                        "cfeb52efe9cb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-600454 -n addons-600454
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-600454 logs -n 25: (1.147689283s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-113425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-113425   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-113425                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-113425   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ -o=json --download-only -p download-only-756817 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-756817   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-756817                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-756817   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-113425                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-113425   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-756817                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-756817   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ --download-only -p download-docker-310469 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-310469 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ -p download-docker-310469                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-310469 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ --download-only -p binary-mirror-864544 --alsologtostderr --binary-mirror http://127.0.0.1:41199 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-864544   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ -p binary-mirror-864544                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-864544   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ addons  │ disable dashboard -p addons-600454                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ addons  │ enable dashboard -p addons-600454                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ start   │ -p addons-600454 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:54 UTC │
	│ addons  │ addons-600454 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:54 UTC │                     │
	│ addons  │ addons-600454 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:55 UTC │                     │
	│ addons  │ addons-600454 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:55 UTC │                     │
	│ addons  │ addons-600454 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:55 UTC │                     │
	│ addons  │ addons-600454 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:55 UTC │                     │
	│ addons  │ addons-600454 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:55 UTC │                     │
	│ addons  │ enable headlamp -p addons-600454 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-600454          │ jenkins │ v1.37.0 │ 10 Jan 26 01:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
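The table above is minikube's command audit for this profile, and the "Last Start" section that follows is the start log for the same run. The same bundle can be re-dumped from the profile itself; a minimal sketch, assuming the same binary and profile name used in this run:

	out/minikube-linux-amd64 -p addons-600454 logs > ./addons-600454-logs.txt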
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:24.140589   15429 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:24.140666   15429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:24.140673   15429 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:24.140677   15429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:24.140867   15429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:53:24.141358   15429 out.go:368] Setting JSON to false
	I0110 01:53:24.142103   15429 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2153,"bootTime":1768007851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 01:53:24.142146   15429 start.go:143] virtualization: kvm guest
	I0110 01:53:24.143780   15429 out.go:179] * [addons-600454] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 01:53:24.144868   15429 notify.go:221] Checking for updates...
	I0110 01:53:24.144874   15429 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 01:53:24.146072   15429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:24.147374   15429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 01:53:24.148546   15429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 01:53:24.149658   15429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 01:53:24.153421   15429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 01:53:24.154600   15429 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:24.175847   15429 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 01:53:24.175940   15429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:24.226621   15429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2026-01-10 01:53:24.217611079 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:53:24.226707   15429 docker.go:319] overlay module found
	I0110 01:53:24.228295   15429 out.go:179] * Using the docker driver based on user configuration
	I0110 01:53:24.229584   15429 start.go:309] selected driver: docker
	I0110 01:53:24.229595   15429 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:24.229604   15429 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 01:53:24.230097   15429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:24.281670   15429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2026-01-10 01:53:24.272956279 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:53:24.281837   15429 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:24.282051   15429 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 01:53:24.283836   15429 out.go:179] * Using Docker driver with root privileges
	I0110 01:53:24.284926   15429 cni.go:84] Creating CNI manager for ""
	I0110 01:53:24.284992   15429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:53:24.285005   15429 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 01:53:24.285081   15429 start.go:353] cluster config:
	{Name:addons-600454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-600454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:53:24.286514   15429 out.go:179] * Starting "addons-600454" primary control-plane node in "addons-600454" cluster
	I0110 01:53:24.287647   15429 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 01:53:24.288703   15429 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 01:53:24.289648   15429 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 01:53:24.289671   15429 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 01:53:24.289678   15429 cache.go:65] Caching tarball of preloaded images
	I0110 01:53:24.289685   15429 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 01:53:24.289747   15429 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 01:53:24.289758   15429 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 01:53:24.290114   15429 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/config.json ...
	I0110 01:53:24.290138   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/config.json: {Name:mkc30502cd2fda3284267d57efcbf79e491ec97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:24.304876   15429 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 01:53:24.304995   15429 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 01:53:24.305015   15429 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 01:53:24.305019   15429 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 01:53:24.305026   15429 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 01:53:24.305032   15429 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from local cache
	I0110 01:53:36.582276   15429 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from cached tarball
	I0110 01:53:36.582336   15429 cache.go:243] Successfully downloaded all kic artifacts
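	The kic base image above is pinned by digest, so it can be pre-pulled into the local Docker daemon ahead of a run; a hedged one-liner using the digest from this log (minikube would then find the image in the daemon instead of loading the cached tarball):

	docker pull gcr.io/k8s-minikube/kicbase-builds@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773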
	I0110 01:53:36.582386   15429 start.go:360] acquireMachinesLock for addons-600454: {Name:mk8975673d88cd3f9ed4dad64668f3822f312fbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 01:53:36.582511   15429 start.go:364] duration metric: took 103.576µs to acquireMachinesLock for "addons-600454"
	I0110 01:53:36.582539   15429 start.go:93] Provisioning new machine with config: &{Name:addons-600454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-600454 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 01:53:36.582629   15429 start.go:125] createHost starting for "" (driver="docker")
	I0110 01:53:36.584265   15429 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0110 01:53:36.584570   15429 start.go:159] libmachine.API.Create for "addons-600454" (driver="docker")
	I0110 01:53:36.584601   15429 client.go:173] LocalClient.Create starting
	I0110 01:53:36.584735   15429 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 01:53:36.714783   15429 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 01:53:36.825757   15429 cli_runner.go:164] Run: docker network inspect addons-600454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 01:53:36.843962   15429 cli_runner.go:211] docker network inspect addons-600454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 01:53:36.844036   15429 network_create.go:284] running [docker network inspect addons-600454] to gather additional debugging logs...
	I0110 01:53:36.844057   15429 cli_runner.go:164] Run: docker network inspect addons-600454
	W0110 01:53:36.859369   15429 cli_runner.go:211] docker network inspect addons-600454 returned with exit code 1
	I0110 01:53:36.859401   15429 network_create.go:287] error running [docker network inspect addons-600454]: docker network inspect addons-600454: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-600454 not found
	I0110 01:53:36.859428   15429 network_create.go:289] output of [docker network inspect addons-600454]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-600454 not found
	
	** /stderr **
	I0110 01:53:36.859516   15429 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 01:53:36.876534   15429 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016de9b0}
	I0110 01:53:36.876564   15429 network_create.go:124] attempt to create docker network addons-600454 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0110 01:53:36.876607   15429 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-600454 addons-600454
	I0110 01:53:36.921437   15429 network_create.go:108] docker network addons-600454 192.168.49.0/24 created
	I0110 01:53:36.921473   15429 kic.go:121] calculated static IP "192.168.49.2" for the "addons-600454" container
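	The freshly created network can be checked with a plain docker network inspect; a short sketch using the network name and fields shown in this log:

	# confirm the subnet and gateway minikube just configured
	docker network inspect addons-600454 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'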
	I0110 01:53:36.921544   15429 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 01:53:36.936853   15429 cli_runner.go:164] Run: docker volume create addons-600454 --label name.minikube.sigs.k8s.io=addons-600454 --label created_by.minikube.sigs.k8s.io=true
	I0110 01:53:36.953429   15429 oci.go:103] Successfully created a docker volume addons-600454
	I0110 01:53:36.953499   15429 cli_runner.go:164] Run: docker run --rm --name addons-600454-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-600454 --entrypoint /usr/bin/test -v addons-600454:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 01:53:43.626138   15429 cli_runner.go:217] Completed: docker run --rm --name addons-600454-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-600454 --entrypoint /usr/bin/test -v addons-600454:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (6.672600372s)
	I0110 01:53:43.626167   15429 oci.go:107] Successfully prepared a docker volume addons-600454
	I0110 01:53:43.626211   15429 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 01:53:43.626228   15429 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 01:53:43.626295   15429 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-600454:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 01:53:47.340418   15429 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-600454:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.714067234s)
	I0110 01:53:47.340459   15429 kic.go:203] duration metric: took 3.714229931s to extract preloaded images to volume ...
	W0110 01:53:47.340582   15429 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 01:53:47.340649   15429 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 01:53:47.340687   15429 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 01:53:47.393362   15429 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-600454 --name addons-600454 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-600454 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-600454 --network addons-600454 --ip 192.168.49.2 --volume addons-600454:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 01:53:47.672476   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Running}}
	I0110 01:53:47.691479   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:53:47.709237   15429 cli_runner.go:164] Run: docker exec addons-600454 stat /var/lib/dpkg/alternatives/iptables
	I0110 01:53:47.756575   15429 oci.go:144] the created container "addons-600454" has a running status.
	I0110 01:53:47.756600   15429 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa...
	I0110 01:53:47.829196   15429 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 01:53:47.856115   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:53:47.873095   15429 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 01:53:47.873122   15429 kic_runner.go:114] Args: [docker exec --privileged addons-600454 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 01:53:47.932596   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:53:47.957164   15429 machine.go:94] provisionDockerMachine start ...
	I0110 01:53:47.957261   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:47.981160   15429 main.go:144] libmachine: Using SSH client type: native
	I0110 01:53:47.981479   15429 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:53:47.981495   15429 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 01:53:48.114805   15429 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-600454
	
	I0110 01:53:48.114833   15429 ubuntu.go:182] provisioning hostname "addons-600454"
	I0110 01:53:48.114902   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:48.133488   15429 main.go:144] libmachine: Using SSH client type: native
	I0110 01:53:48.133866   15429 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:53:48.133900   15429 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-600454 && echo "addons-600454" | sudo tee /etc/hostname
	I0110 01:53:48.271325   15429 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-600454
	
	I0110 01:53:48.271386   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:48.290644   15429 main.go:144] libmachine: Using SSH client type: native
	I0110 01:53:48.290847   15429 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:53:48.290864   15429 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-600454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-600454/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-600454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 01:53:48.416025   15429 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 01:53:48.416047   15429 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 01:53:48.416084   15429 ubuntu.go:190] setting up certificates
	I0110 01:53:48.416108   15429 provision.go:84] configureAuth start
	I0110 01:53:48.416157   15429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-600454
	I0110 01:53:48.432606   15429 provision.go:143] copyHostCerts
	I0110 01:53:48.432665   15429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 01:53:48.432765   15429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 01:53:48.432822   15429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 01:53:48.432868   15429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.addons-600454 san=[127.0.0.1 192.168.49.2 addons-600454 localhost minikube]
	I0110 01:53:48.512700   15429 provision.go:177] copyRemoteCerts
	I0110 01:53:48.512751   15429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 01:53:48.512788   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:48.530572   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:53:48.620983   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 01:53:48.638296   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0110 01:53:48.653725   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 01:53:48.670345   15429 provision.go:87] duration metric: took 254.216568ms to configureAuth
	I0110 01:53:48.670369   15429 ubuntu.go:206] setting minikube options for container-runtime
	I0110 01:53:48.670525   15429 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:53:48.670624   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:48.687250   15429 main.go:144] libmachine: Using SSH client type: native
	I0110 01:53:48.687464   15429 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:53:48.687487   15429 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 01:53:48.952917   15429 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 01:53:48.952945   15429 machine.go:97] duration metric: took 995.760114ms to provisionDockerMachine
	I0110 01:53:48.952961   15429 client.go:176] duration metric: took 12.368353236s to LocalClient.Create
	I0110 01:53:48.952985   15429 start.go:167] duration metric: took 12.368413447s to libmachine.API.Create "addons-600454"
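	The CRIO_MINIKUBE_OPTIONS value written a few lines above lands in /etc/sysconfig/crio.minikube, presumably referenced as an environment file by the crio.service unit in the kic base image (hence the systemctl restart crio in the same command). Two simple checks from inside the node, e.g. over minikube ssh:

	cat /etc/sysconfig/crio.minikube
	sudo systemctl cat crio | grep -i EnvironmentFile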
	I0110 01:53:48.953001   15429 start.go:293] postStartSetup for "addons-600454" (driver="docker")
	I0110 01:53:48.953015   15429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 01:53:48.953084   15429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 01:53:48.953135   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:48.970285   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:53:49.061578   15429 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 01:53:49.065105   15429 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 01:53:49.065127   15429 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 01:53:49.065139   15429 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 01:53:49.065197   15429 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 01:53:49.065225   15429 start.go:296] duration metric: took 112.216989ms for postStartSetup
	I0110 01:53:49.065507   15429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-600454
	I0110 01:53:49.082529   15429 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/config.json ...
	I0110 01:53:49.082749   15429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 01:53:49.082787   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:49.099705   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:53:49.187344   15429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 01:53:49.191308   15429 start.go:128] duration metric: took 12.608663855s to createHost
	I0110 01:53:49.191327   15429 start.go:83] releasing machines lock for "addons-600454", held for 12.608801874s
	I0110 01:53:49.191390   15429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-600454
	I0110 01:53:49.208904   15429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 01:53:49.208983   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:49.208901   15429 ssh_runner.go:195] Run: cat /version.json
	I0110 01:53:49.209063   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:53:49.226652   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:53:49.227657   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:53:49.365667   15429 ssh_runner.go:195] Run: systemctl --version
	I0110 01:53:49.371674   15429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 01:53:49.402190   15429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 01:53:49.406310   15429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 01:53:49.406363   15429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 01:53:49.428867   15429 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0110 01:53:49.428902   15429 start.go:496] detecting cgroup driver to use...
	I0110 01:53:49.428934   15429 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 01:53:49.428980   15429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 01:53:49.443237   15429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 01:53:49.453943   15429 docker.go:218] disabling cri-docker service (if available) ...
	I0110 01:53:49.453984   15429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 01:53:49.468426   15429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 01:53:49.484692   15429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 01:53:49.562781   15429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 01:53:49.644086   15429 docker.go:234] disabling docker service ...
	I0110 01:53:49.644142   15429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 01:53:49.660297   15429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 01:53:49.672457   15429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 01:53:49.751092   15429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 01:53:49.832487   15429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 01:53:49.844152   15429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 01:53:49.856681   15429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 01:53:49.856739   15429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.866021   15429 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 01:53:49.866068   15429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.874094   15429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.881931   15429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.889706   15429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 01:53:49.897310   15429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.904953   15429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.916828   15429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:53:49.924565   15429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 01:53:49.930942   15429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0110 01:53:49.930977   15429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0110 01:53:49.941938   15429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 01:53:49.948300   15429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 01:53:50.023610   15429 ssh_runner.go:195] Run: sudo systemctl restart crio
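	The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. Pieced together, the fragment they produce should look roughly like the following (a hedged reconstruction, not output captured by this test; section placement assumed per upstream CRI-O defaults):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]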
	I0110 01:53:50.153298   15429 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 01:53:50.153369   15429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 01:53:50.157103   15429 start.go:574] Will wait 60s for crictl version
	I0110 01:53:50.157152   15429 ssh_runner.go:195] Run: which crictl
	I0110 01:53:50.160642   15429 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 01:53:50.184323   15429 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 01:53:50.184423   15429 ssh_runner.go:195] Run: crio --version
	I0110 01:53:50.209694   15429 ssh_runner.go:195] Run: crio --version
	I0110 01:53:50.236384   15429 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 01:53:50.237462   15429 cli_runner.go:164] Run: docker network inspect addons-600454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 01:53:50.253376   15429 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0110 01:53:50.257234   15429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 01:53:50.266717   15429 kubeadm.go:884] updating cluster {Name:addons-600454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-600454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 01:53:50.266813   15429 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 01:53:50.266851   15429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 01:53:50.299083   15429 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 01:53:50.299101   15429 crio.go:433] Images already preloaded, skipping extraction
	I0110 01:53:50.299146   15429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 01:53:50.322801   15429 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 01:53:50.322819   15429 cache_images.go:86] Images are preloaded, skipping loading
	I0110 01:53:50.322826   15429 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I0110 01:53:50.322938   15429 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-600454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-600454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
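	The ExecStart override and config above are what minikube installs for the kubelet on the node; to see the unit the node actually runs, one can shell in and dump it (profile name from this run, command sketch only):

	out/minikube-linux-amd64 -p addons-600454 ssh -- sudo systemctl cat kubelet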
	I0110 01:53:50.323014   15429 ssh_runner.go:195] Run: crio config
	I0110 01:53:50.367147   15429 cni.go:84] Creating CNI manager for ""
	I0110 01:53:50.367166   15429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:53:50.367182   15429 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 01:53:50.367207   15429 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-600454 NodeName:addons-600454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 01:53:50.367355   15429 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-600454"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
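
	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are emitted as one multi-document YAML file, which is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small Go sketch of splitting such a combined file back into its documents by kind; the string-based parsing is a simplification for illustration, not how kubeadm or minikube actually decode it:

	package main

	import (
		"fmt"
		"strings"
	)

	// kindOf extracts the `kind:` value from a single YAML document using plain
	// string handling, which is enough for well-formed documents like the ones above.
	func kindOf(doc string) string {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				return strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
			}
		}
		return ""
	}

	func main() {
		// combined stands in for the generated multi-document kubeadm.yaml.
		combined := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration`

		for _, doc := range strings.Split(combined, "\n---\n") {
			fmt.Println(kindOf(doc))
		}
	}
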
	I0110 01:53:50.367460   15429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 01:53:50.375340   15429 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 01:53:50.375413   15429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 01:53:50.382746   15429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0110 01:53:50.394217   15429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 01:53:50.408048   15429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0110 01:53:50.419454   15429 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0110 01:53:50.422599   15429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
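
	The bash one-liner above makes the /etc/hosts update idempotent: it filters out any previous control-plane.minikube.internal line, appends the current mapping, and copies the temp file back over /etc/hosts. The same ensure-one-entry idea as a rough Go sketch (ensureHostsEntry is an illustrative helper; the temp-file and sudo copy steps are omitted):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line ending with the given host and
	// appends "ip\thost", mirroring the logged grep -v / echo pipeline.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale mapping
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
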
	I0110 01:53:50.431461   15429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 01:53:50.508386   15429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 01:53:50.531002   15429 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454 for IP: 192.168.49.2
	I0110 01:53:50.531026   15429 certs.go:195] generating shared ca certs ...
	I0110 01:53:50.531040   15429 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.531161   15429 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 01:53:50.712538   15429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt ...
	I0110 01:53:50.712565   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt: {Name:mk811e5fb144c7c05a21b015d05a6c7e74b5d3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.712749   15429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key ...
	I0110 01:53:50.712770   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key: {Name:mk741547572ee409e1e3b1872313e3cc7157860a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.712894   15429 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 01:53:50.823998   15429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt ...
	I0110 01:53:50.824024   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt: {Name:mkd8d6424f090ffb347958e6d19e744670429d61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.824193   15429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key ...
	I0110 01:53:50.824204   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key: {Name:mkd899e5faa792140ae8e7891560ca246de8eb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
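
	The two shared CAs generated above (minikubeCA and proxyClientCA) are self-signed certificates that later sign the per-profile client, apiserver, and aggregator certs. A bare-bones Go sketch of creating such a self-signed CA with crypto/x509; the subject name and key size are illustrative, and the 26280h lifetime simply echoes the CertExpiration value in the cluster config above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"time"
	)

	func main() {
		// Generate a key and a self-signed CA certificate; these values are
		// illustrative, not minikube's exact parameters.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
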
	I0110 01:53:50.824272   15429 certs.go:257] generating profile certs ...
	I0110 01:53:50.824330   15429 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.key
	I0110 01:53:50.824344   15429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt with IP's: []
	I0110 01:53:50.898083   15429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt ...
	I0110 01:53:50.898109   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: {Name:mk6f8289906b482c808e294b57d664ff97fafaad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.898252   15429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.key ...
	I0110 01:53:50.898262   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.key: {Name:mk168d2de6d7037c154ecf0216b6e059c8f8d1ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.898328   15429 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.key.995ec0be
	I0110 01:53:50.898352   15429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.crt.995ec0be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0110 01:53:50.943492   15429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.crt.995ec0be ...
	I0110 01:53:50.943513   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.crt.995ec0be: {Name:mkd26783c9dea7d0d3a665f307cf6403bd189040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.943634   15429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.key.995ec0be ...
	I0110 01:53:50.943649   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.key.995ec0be: {Name:mk6c06ff01f5312c15d7cc2a467dff50228f5713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:50.943721   15429 certs.go:382] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.crt.995ec0be -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.crt
	I0110 01:53:50.943794   15429 certs.go:386] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.key.995ec0be -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.key
	I0110 01:53:50.943837   15429 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.key
	I0110 01:53:50.943854   15429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.crt with IP's: []
	I0110 01:53:51.196857   15429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.crt ...
	I0110 01:53:51.196881   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.crt: {Name:mkdde788ad37329e7d5796834f07d24ce4fd65ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:51.197036   15429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.key ...
	I0110 01:53:51.197047   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.key: {Name:mk729b4b094f3e9bc3fc5b753d6aaa9fdb0de620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:51.197211   15429 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 01:53:51.197245   15429 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 01:53:51.197273   15429 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 01:53:51.197299   15429 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 01:53:51.197936   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 01:53:51.215483   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 01:53:51.231253   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 01:53:51.246800   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 01:53:51.262425   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 01:53:51.277570   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 01:53:51.292592   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 01:53:51.308049   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 01:53:51.323329   15429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 01:53:51.340800   15429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 01:53:51.352523   15429 ssh_runner.go:195] Run: openssl version
	I0110 01:53:51.358529   15429 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:53:51.365693   15429 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 01:53:51.375264   15429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:53:51.378689   15429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:53:51.378731   15429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:53:51.412427   15429 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 01:53:51.419015   15429 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
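
	The "openssl x509 -hash" call above prints the subject-name hash of the CA (b5213941 in this run), and the following symlink installs the cert under /etc/ssl/certs/<hash>.0, the layout OpenSSL uses to look up trusted CAs. A short Go sketch of the same two steps; linkCAByHash is an illustrative name, not minikube's code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash computes OpenSSL's subject-name hash for certPath and links
	// it into certsDir as <hash>.0.
	func linkCAByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the run above
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
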
	I0110 01:53:51.425347   15429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 01:53:51.428371   15429 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 01:53:51.428412   15429 kubeadm.go:401] StartCluster: {Name:addons-600454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-600454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:53:51.428488   15429 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:53:51.428526   15429 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:53:51.453035   15429 cri.go:96] found id: ""
	I0110 01:53:51.453085   15429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 01:53:51.460073   15429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 01:53:51.467041   15429 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 01:53:51.467099   15429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 01:53:51.473913   15429 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 01:53:51.473932   15429 kubeadm.go:158] found existing configuration files:
	
	I0110 01:53:51.473966   15429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 01:53:51.480797   15429 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 01:53:51.480848   15429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 01:53:51.487398   15429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 01:53:51.494011   15429 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 01:53:51.494046   15429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 01:53:51.500418   15429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 01:53:51.507153   15429 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 01:53:51.507197   15429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 01:53:51.513590   15429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 01:53:51.520194   15429 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 01:53:51.520226   15429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 01:53:51.526553   15429 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 01:53:51.614756   15429 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 01:53:51.665959   15429 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 01:53:58.796669   15429 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 01:53:58.796750   15429 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 01:53:58.796862   15429 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 01:53:58.796958   15429 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 01:53:58.797038   15429 kubeadm.go:319] OS: Linux
	I0110 01:53:58.797126   15429 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 01:53:58.797190   15429 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 01:53:58.797310   15429 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 01:53:58.797409   15429 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 01:53:58.797486   15429 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 01:53:58.797559   15429 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 01:53:58.797652   15429 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 01:53:58.797735   15429 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 01:53:58.797864   15429 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 01:53:58.798029   15429 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 01:53:58.798142   15429 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 01:53:58.798267   15429 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 01:53:58.800734   15429 out.go:252]   - Generating certificates and keys ...
	I0110 01:53:58.800807   15429 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 01:53:58.800863   15429 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 01:53:58.800957   15429 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 01:53:58.801009   15429 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 01:53:58.801066   15429 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 01:53:58.801123   15429 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 01:53:58.801178   15429 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 01:53:58.801401   15429 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-600454 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 01:53:58.801493   15429 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 01:53:58.801618   15429 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-600454 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 01:53:58.801702   15429 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 01:53:58.801791   15429 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 01:53:58.801858   15429 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 01:53:58.801960   15429 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 01:53:58.802020   15429 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 01:53:58.802071   15429 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 01:53:58.802121   15429 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 01:53:58.802198   15429 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 01:53:58.802254   15429 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 01:53:58.802334   15429 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 01:53:58.802397   15429 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 01:53:58.803835   15429 out.go:252]   - Booting up control plane ...
	I0110 01:53:58.803948   15429 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 01:53:58.804047   15429 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 01:53:58.804122   15429 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 01:53:58.804245   15429 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 01:53:58.804342   15429 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 01:53:58.804440   15429 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 01:53:58.804524   15429 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 01:53:58.804559   15429 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 01:53:58.804678   15429 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 01:53:58.804787   15429 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 01:53:58.804840   15429 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.831833ms
	I0110 01:53:58.804952   15429 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 01:53:58.805065   15429 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0110 01:53:58.805146   15429 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 01:53:58.805216   15429 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 01:53:58.805282   15429 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.681578ms
	I0110 01:53:58.805346   15429 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.11685659s
	I0110 01:53:58.805411   15429 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001048703s
	I0110 01:53:58.805519   15429 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 01:53:58.805688   15429 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 01:53:58.805779   15429 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 01:53:58.806020   15429 kubeadm.go:319] [mark-control-plane] Marking the node addons-600454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 01:53:58.806072   15429 kubeadm.go:319] [bootstrap-token] Using token: 88vevv.vyn1cntcdo4hht97
	I0110 01:53:58.807985   15429 out.go:252]   - Configuring RBAC rules ...
	I0110 01:53:58.808092   15429 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 01:53:58.808188   15429 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 01:53:58.808376   15429 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 01:53:58.808541   15429 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 01:53:58.808658   15429 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 01:53:58.808744   15429 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 01:53:58.808841   15429 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 01:53:58.808880   15429 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 01:53:58.808934   15429 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 01:53:58.808937   15429 kubeadm.go:319] 
	I0110 01:53:58.808989   15429 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 01:53:58.809016   15429 kubeadm.go:319] 
	I0110 01:53:58.809086   15429 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 01:53:58.809093   15429 kubeadm.go:319] 
	I0110 01:53:58.809113   15429 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 01:53:58.809171   15429 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 01:53:58.809218   15429 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 01:53:58.809223   15429 kubeadm.go:319] 
	I0110 01:53:58.809276   15429 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 01:53:58.809284   15429 kubeadm.go:319] 
	I0110 01:53:58.809323   15429 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 01:53:58.809329   15429 kubeadm.go:319] 
	I0110 01:53:58.809390   15429 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 01:53:58.809458   15429 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 01:53:58.809517   15429 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 01:53:58.809523   15429 kubeadm.go:319] 
	I0110 01:53:58.809622   15429 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 01:53:58.809737   15429 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 01:53:58.809750   15429 kubeadm.go:319] 
	I0110 01:53:58.809865   15429 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 88vevv.vyn1cntcdo4hht97 \
	I0110 01:53:58.809973   15429 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 \
	I0110 01:53:58.809993   15429 kubeadm.go:319] 	--control-plane 
	I0110 01:53:58.809999   15429 kubeadm.go:319] 
	I0110 01:53:58.810078   15429 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 01:53:58.810086   15429 kubeadm.go:319] 
	I0110 01:53:58.810154   15429 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 88vevv.vyn1cntcdo4hht97 \
	I0110 01:53:58.810252   15429 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 
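
	The --discovery-token-ca-cert-hash printed in the join commands above is a sha256 digest of the cluster CA's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from ca.crt, which can be useful for verifying a join command out of band:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash computes the value kubeadm prints as
	// --discovery-token-ca-cert-hash: sha256 over the CA's DER-encoded
	// SubjectPublicKeyInfo.
	func caCertHash(caPath string) (string, error) {
		pemBytes, err := os.ReadFile(caPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", caPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(spki)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
	}
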
	I0110 01:53:58.810262   15429 cni.go:84] Creating CNI manager for ""
	I0110 01:53:58.810268   15429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:53:58.811527   15429 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 01:53:58.812662   15429 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 01:53:58.816724   15429 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 01:53:58.816740   15429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 01:53:58.830270   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 01:53:59.019519   15429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 01:53:59.019600   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:53:59.019600   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-600454 minikube.k8s.io/updated_at=2026_01_10T01_53_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=addons-600454 minikube.k8s.io/primary=true
	I0110 01:53:59.030811   15429 ops.go:34] apiserver oom_adj: -16
	I0110 01:53:59.108096   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:53:59.608736   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:00.108144   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:00.608337   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:01.109071   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:01.608495   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:02.108755   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:02.609035   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:03.109067   15429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:03.172742   15429 kubeadm.go:1114] duration metric: took 4.15320361s to wait for elevateKubeSystemPrivileges
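
	The repeated "kubectl get sa default" calls above are a readiness poll: the command is retried roughly every half second until the default ServiceAccount exists, which took about 4.15s here before elevateKubeSystemPrivileges completed. A rough Go sketch of that polling pattern (waitForDefaultSA is an illustrative helper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
	// timeout expires, mirroring the half-second cadence seen in the log above.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
	}
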
	I0110 01:54:03.172778   15429 kubeadm.go:403] duration metric: took 11.744369387s to StartCluster
	I0110 01:54:03.172801   15429 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:03.172937   15429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 01:54:03.173386   15429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:03.173593   15429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 01:54:03.173621   15429 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 01:54:03.173690   15429 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0110 01:54:03.173833   15429 addons.go:70] Setting yakd=true in profile "addons-600454"
	I0110 01:54:03.173849   15429 addons.go:70] Setting default-storageclass=true in profile "addons-600454"
	I0110 01:54:03.173864   15429 addons.go:70] Setting inspektor-gadget=true in profile "addons-600454"
	I0110 01:54:03.173871   15429 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:03.173882   15429 addons.go:70] Setting cloud-spanner=true in profile "addons-600454"
	I0110 01:54:03.173896   15429 addons.go:70] Setting ingress=true in profile "addons-600454"
	I0110 01:54:03.173910   15429 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-600454"
	I0110 01:54:03.173874   15429 addons.go:239] Setting addon inspektor-gadget=true in "addons-600454"
	I0110 01:54:03.173929   15429 addons.go:70] Setting registry=true in profile "addons-600454"
	I0110 01:54:03.173933   15429 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-600454"
	I0110 01:54:03.173947   15429 addons.go:239] Setting addon registry=true in "addons-600454"
	I0110 01:54:03.173945   15429 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-600454"
	I0110 01:54:03.173955   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.173966   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.173975   15429 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-600454"
	I0110 01:54:03.173994   15429 addons.go:70] Setting storage-provisioner=true in profile "addons-600454"
	I0110 01:54:03.174003   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.174008   15429 addons.go:239] Setting addon storage-provisioner=true in "addons-600454"
	I0110 01:54:03.174006   15429 addons.go:70] Setting volcano=true in profile "addons-600454"
	I0110 01:54:03.174025   15429 addons.go:239] Setting addon volcano=true in "addons-600454"
	I0110 01:54:03.174034   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.173845   15429 addons.go:70] Setting ingress-dns=true in profile "addons-600454"
	I0110 01:54:03.174051   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.174067   15429 addons.go:239] Setting addon ingress-dns=true in "addons-600454"
	I0110 01:54:03.173855   15429 addons.go:239] Setting addon yakd=true in "addons-600454"
	I0110 01:54:03.174094   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.174135   15429 addons.go:239] Setting addon ingress=true in "addons-600454"
	I0110 01:54:03.174164   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.174270   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.174389   15429 addons.go:70] Setting volumesnapshots=true in profile "addons-600454"
	I0110 01:54:03.174485   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.174490   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.174503   15429 addons.go:239] Setting addon volumesnapshots=true in "addons-600454"
	I0110 01:54:03.174506   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.174526   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.174538   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.174586   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.174979   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.175108   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.173873   15429 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-600454"
	I0110 01:54:03.173977   15429 addons.go:70] Setting registry-creds=true in profile "addons-600454"
	I0110 01:54:03.175757   15429 addons.go:239] Setting addon registry-creds=true in "addons-600454"
	I0110 01:54:03.175783   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.176264   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.173967   15429 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-600454"
	I0110 01:54:03.176597   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.173882   15429 addons.go:70] Setting metrics-server=true in profile "addons-600454"
	I0110 01:54:03.177460   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.176784   15429 out.go:179] * Verifying Kubernetes components...
	I0110 01:54:03.177527   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.173835   15429 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-600454"
	I0110 01:54:03.180853   15429 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-600454"
	I0110 01:54:03.180905   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.181582   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.177777   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.173873   15429 addons.go:70] Setting gcp-auth=true in profile "addons-600454"
	I0110 01:54:03.181841   15429 mustload.go:66] Loading cluster: addons-600454
	I0110 01:54:03.173925   15429 addons.go:239] Setting addon cloud-spanner=true in "addons-600454"
	I0110 01:54:03.181951   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.182041   15429 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:03.173922   15429 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-600454"
	I0110 01:54:03.182081   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.182297   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.182416   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.177809   15429 addons.go:239] Setting addon metrics-server=true in "addons-600454"
	I0110 01:54:03.182942   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.183392   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.185562   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.185759   15429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 01:54:03.253609   15429 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I0110 01:54:03.255627   15429 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 01:54:03.255648   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0110 01:54:03.255711   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.257234   15429 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I0110 01:54:03.260255   15429 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0110 01:54:03.260275   15429 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0110 01:54:03.260335   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	W0110 01:54:03.262174   15429 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0110 01:54:03.262329   15429 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 01:54:03.264934   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0110 01:54:03.265035   15429 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0110 01:54:03.265068   15429 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 01:54:03.266385   15429 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 01:54:03.266413   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0110 01:54:03.266481   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.266759   15429 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 01:54:03.266774   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 01:54:03.266812   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.266910   15429 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0110 01:54:03.266925   15429 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0110 01:54:03.266974   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.267050   15429 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I0110 01:54:03.272385   15429 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 01:54:03.272440   15429 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0110 01:54:03.273575   15429 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 01:54:03.273596   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I0110 01:54:03.273645   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.275663   15429 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I0110 01:54:03.275770   15429 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0110 01:54:03.276654   15429 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-600454"
	I0110 01:54:03.276695   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.277307   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.277529   15429 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0110 01:54:03.277767   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0110 01:54:03.277815   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.277578   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0110 01:54:03.278996   15429 out.go:179]   - Using image docker.io/registry:3.0.0
	I0110 01:54:03.279656   15429 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 01:54:03.279679   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0110 01:54:03.279740   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.280294   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0110 01:54:03.280917   15429 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I0110 01:54:03.280935   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0110 01:54:03.280984   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.282654   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0110 01:54:03.283662   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0110 01:54:03.284627   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0110 01:54:03.285626   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0110 01:54:03.287088   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.288016   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0110 01:54:03.289129   15429 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0110 01:54:03.290008   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0110 01:54:03.290029   15429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0110 01:54:03.290088   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.298794   15429 addons.go:239] Setting addon default-storageclass=true in "addons-600454"
	I0110 01:54:03.298836   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:03.299339   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:03.302034   15429 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0110 01:54:03.303028   15429 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0110 01:54:03.303053   15429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0110 01:54:03.303114   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.307232   15429 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0110 01:54:03.311961   15429 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 01:54:03.311984   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0110 01:54:03.312038   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.316050   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.328025   15429 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I0110 01:54:03.328917   15429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 01:54:03.330106   15429 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I0110 01:54:03.330122   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0110 01:54:03.330176   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.344032   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.346638   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.349760   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.351580   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.352384   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.360126   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.362306   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.375863   15429 out.go:179]   - Using image docker.io/busybox:stable
	I0110 01:54:03.378954   15429 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0110 01:54:03.380021   15429 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 01:54:03.380044   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0110 01:54:03.380100   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.385054   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.387006   15429 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 01:54:03.387051   15429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 01:54:03.387107   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:03.390989   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.391465   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.391569   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.402280   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:03.405968   15429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 01:54:03.413053   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	W0110 01:54:03.415036   15429 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0110 01:54:03.415078   15429 retry.go:84] will retry after 200ms: ssh: handshake failed: EOF
	I0110 01:54:03.418719   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	W0110 01:54:03.419680   15429 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0110 01:54:03.476321   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 01:54:03.504273   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 01:54:03.524194   15429 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I0110 01:54:03.524216   15429 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0110 01:54:03.533836   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 01:54:03.533855   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0110 01:54:03.533870   15429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0110 01:54:03.541706   15429 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0110 01:54:03.541725   15429 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0110 01:54:03.552782   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 01:54:03.555654   15429 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0110 01:54:03.555680   15429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0110 01:54:03.555710   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 01:54:03.556380   15429 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0110 01:54:03.556394   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0110 01:54:03.556484   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I0110 01:54:03.566190   15429 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0110 01:54:03.566267   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0110 01:54:03.566970   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 01:54:03.571444   15429 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0110 01:54:03.571465   15429 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0110 01:54:03.575409   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0110 01:54:03.575430   15429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0110 01:54:03.582379   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0110 01:54:03.602465   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0110 01:54:03.605130   15429 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0110 01:54:03.605152   15429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0110 01:54:03.606321   15429 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0110 01:54:03.606343   15429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0110 01:54:03.628604   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0110 01:54:03.628653   15429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0110 01:54:03.633173   15429 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0110 01:54:03.633211   15429 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0110 01:54:03.668855   15429 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 01:54:03.668893   15429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0110 01:54:03.679941   15429 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0110 01:54:03.679961   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I0110 01:54:03.696106   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0110 01:54:03.696137   15429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0110 01:54:03.700414   15429 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0110 01:54:03.700483   15429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0110 01:54:03.703436   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 01:54:03.729814   15429 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0110 01:54:03.731783   15429 node_ready.go:35] waiting up to 6m0s for node "addons-600454" to be "Ready" ...
	I0110 01:54:03.749809   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0110 01:54:03.755940   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0110 01:54:03.755985   15429 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0110 01:54:03.768791   15429 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0110 01:54:03.768814   15429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0110 01:54:03.794118   15429 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0110 01:54:03.794200   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0110 01:54:03.822487   15429 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 01:54:03.822509   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0110 01:54:03.877068   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 01:54:03.878404   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 01:54:03.883512   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 01:54:03.889141   15429 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0110 01:54:03.889162   15429 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0110 01:54:03.936811   15429 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0110 01:54:03.936834   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0110 01:54:03.961805   15429 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0110 01:54:03.961825   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0110 01:54:04.016078   15429 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 01:54:04.016161   15429 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0110 01:54:04.061090   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 01:54:04.240290   15429 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-600454" context rescaled to 1 replicas
	I0110 01:54:04.762754   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.209936946s)
	I0110 01:54:04.762794   15429 addons.go:495] Verifying addon ingress=true in "addons-600454"
	I0110 01:54:04.763531   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.207798666s)
	I0110 01:54:04.763688   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.207184189s)
	I0110 01:54:04.763723   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.196732889s)
	I0110 01:54:04.763767   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.181369153s)
	I0110 01:54:04.763803   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161311904s)
	I0110 01:54:04.764138   15429 addons.go:495] Verifying addon registry=true in "addons-600454"
	I0110 01:54:04.764312   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.014458916s)
	I0110 01:54:04.764350   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060564491s)
	I0110 01:54:04.764927   15429 addons.go:495] Verifying addon metrics-server=true in "addons-600454"
	I0110 01:54:04.764899   15429 out.go:179] * Verifying ingress addon...
	I0110 01:54:04.766089   15429 out.go:179] * Verifying registry addon...
	I0110 01:54:04.766830   15429 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-600454 service yakd-dashboard -n yakd-dashboard
	
	I0110 01:54:04.766858   15429 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0110 01:54:04.767939   15429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0110 01:54:04.769996   15429 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0110 01:54:04.771037   15429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 01:54:04.771056   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:05.274423   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:05.275560   15429 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0110 01:54:05.275625   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:05.305165   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.426675135s)
	I0110 01:54:05.305244   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.421705909s)
	W0110 01:54:05.305280   15429 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0110 01:54:05.305564   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.244431925s)
	I0110 01:54:05.305590   15429 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-600454"
	I0110 01:54:05.306908   15429 out.go:179] * Verifying csi-hostpath-driver addon...
	I0110 01:54:05.309919   15429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0110 01:54:05.315849   15429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 01:54:05.315867   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:05.451575   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0110 01:54:05.734134   15429 node_ready.go:57] node "addons-600454" has "Ready":"False" status (will retry)
	I0110 01:54:05.770561   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:05.770780   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:05.813258   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:06.269700   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:06.270110   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:06.369966   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:06.770633   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:06.770801   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:06.813012   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:07.270299   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:07.270550   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:07.370966   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0110 01:54:07.734526   15429 node_ready.go:57] node "addons-600454" has "Ready":"False" status (will retry)
	I0110 01:54:07.771430   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:07.771504   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:07.812526   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:07.903196   15429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.451567862s)
	I0110 01:54:08.270235   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:08.270314   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:08.371587   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:08.770664   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:08.770755   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:08.813021   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:09.270506   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:09.270644   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:09.371444   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:09.770230   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:09.770537   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:09.812862   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0110 01:54:10.234384   15429 node_ready.go:57] node "addons-600454" has "Ready":"False" status (will retry)
	I0110 01:54:10.269994   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:10.270429   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:10.370949   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:10.770199   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:10.770510   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:10.812985   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:10.893128   15429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0110 01:54:10.893199   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:10.911194   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:11.007906   15429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0110 01:54:11.019603   15429 addons.go:239] Setting addon gcp-auth=true in "addons-600454"
	I0110 01:54:11.019649   15429 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:54:11.020080   15429 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:54:11.037095   15429 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0110 01:54:11.037136   15429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:54:11.052860   15429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:54:11.142582   15429 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 01:54:11.143858   15429 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0110 01:54:11.144764   15429 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0110 01:54:11.144778   15429 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0110 01:54:11.157349   15429 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0110 01:54:11.157368   15429 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0110 01:54:11.169576   15429 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 01:54:11.169592   15429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0110 01:54:11.181596   15429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 01:54:11.271007   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:11.271176   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:11.312795   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:11.469484   15429 addons.go:495] Verifying addon gcp-auth=true in "addons-600454"
	I0110 01:54:11.470522   15429 out.go:179] * Verifying gcp-auth addon...
	I0110 01:54:11.472237   15429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0110 01:54:11.475017   15429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0110 01:54:11.475030   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:11.769870   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:11.770782   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:11.813394   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:11.974693   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:12.270139   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:12.270332   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:12.312392   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:12.474825   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0110 01:54:12.734299   15429 node_ready.go:57] node "addons-600454" has "Ready":"False" status (will retry)
	I0110 01:54:12.769841   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:12.770297   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:12.812772   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:12.975181   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:13.269714   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:13.270488   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:13.312484   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:13.475023   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:13.769841   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:13.770683   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:13.813108   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:13.975540   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:14.270803   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:14.271038   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:14.313163   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:14.475537   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0110 01:54:14.734984   15429 node_ready.go:57] node "addons-600454" has "Ready":"False" status (will retry)
	I0110 01:54:14.770162   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:14.770169   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:14.814387   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:14.974923   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:15.270398   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:15.270454   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:15.312435   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:15.474776   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:15.769667   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:15.770301   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:15.812685   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:15.975243   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:16.236860   15429 node_ready.go:49] node "addons-600454" is "Ready"
	I0110 01:54:16.236915   15429 node_ready.go:38] duration metric: took 12.505109091s for node "addons-600454" to be "Ready" ...
	I0110 01:54:16.237119   15429 api_server.go:52] waiting for apiserver process to appear ...
	I0110 01:54:16.237201   15429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 01:54:16.256092   15429 api_server.go:72] duration metric: took 13.082435885s to wait for apiserver process to appear ...
	I0110 01:54:16.256168   15429 api_server.go:88] waiting for apiserver healthz status ...
	I0110 01:54:16.256196   15429 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0110 01:54:16.263028   15429 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0110 01:54:16.263990   15429 api_server.go:141] control plane version: v1.35.0
	I0110 01:54:16.264015   15429 api_server.go:131] duration metric: took 7.834385ms to wait for apiserver health ...
	I0110 01:54:16.264026   15429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 01:54:16.272609   15429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 01:54:16.272629   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:16.272865   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:16.273849   15429 system_pods.go:59] 20 kube-system pods found
	I0110 01:54:16.273904   15429 system_pods.go:61] "amd-gpu-device-plugin-r27zc" [61f0da90-77f5-49e4-ab9f-eda90d4e04ea] Pending
	I0110 01:54:16.273919   15429 system_pods.go:61] "coredns-7d764666f9-zhk8p" [fccf2c93-6c49-43c6-937a-5b05f1f2f018] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 01:54:16.273929   15429 system_pods.go:61] "csi-hostpath-attacher-0" [d808a7e6-5d96-4d63-8a02-98f61dae8b39] Pending
	I0110 01:54:16.273942   15429 system_pods.go:61] "csi-hostpath-resizer-0" [f0e0eb04-cd83-4f67-8f0f-ce50a6df09ad] Pending
	I0110 01:54:16.273947   15429 system_pods.go:61] "csi-hostpathplugin-9bjcs" [0cd76d52-c549-4655-8a86-d10512fb7bd2] Pending
	I0110 01:54:16.273955   15429 system_pods.go:61] "etcd-addons-600454" [45a99124-ab40-4d88-876d-fafda6c9a126] Running
	I0110 01:54:16.273960   15429 system_pods.go:61] "kindnet-nw7pc" [b5eb404a-0888-4dd0-873e-c644c974660c] Running
	I0110 01:54:16.273969   15429 system_pods.go:61] "kube-apiserver-addons-600454" [c7b16823-4da2-429d-a189-8c63bc51318e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 01:54:16.274012   15429 system_pods.go:61] "kube-controller-manager-addons-600454" [f4882c9a-36bb-435b-88e5-c21ca63bcc0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 01:54:16.274025   15429 system_pods.go:61] "kube-ingress-dns-minikube" [306271d0-7ec7-4b01-afba-2d85534d0b6b] Pending
	I0110 01:54:16.274031   15429 system_pods.go:61] "kube-proxy-n6xgk" [2300e881-f633-4ae4-833b-cde1e63efd3a] Running
	I0110 01:54:16.274036   15429 system_pods.go:61] "kube-scheduler-addons-600454" [9c4d93dd-ffe8-4a50-8c04-0562e44de0f6] Running
	I0110 01:54:16.274043   15429 system_pods.go:61] "metrics-server-5778bb4788-pj8xt" [81ec1b8b-b693-480e-b5c6-1d50cb816a02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:16.274052   15429 system_pods.go:61] "nvidia-device-plugin-daemonset-842xc" [56576535-28b2-4154-90d5-fae5922238e3] Pending
	I0110 01:54:16.274058   15429 system_pods.go:61] "registry-788cd7d5bc-mlf8m" [94bf43da-60ab-405e-8e3c-ba8318d37ad2] Pending
	I0110 01:54:16.274065   15429 system_pods.go:61] "registry-creds-567fb78d95-jkz6h" [42464cff-62ad-44e8-8ad0-a33b9cd7ff90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:16.274072   15429 system_pods.go:61] "registry-proxy-zx8d4" [591ae63a-173e-4fb4-89b9-9fd8522cb1c1] Pending
	I0110 01:54:16.274078   15429 system_pods.go:61] "snapshot-controller-6588d87457-f4v27" [0efdaebf-eda7-4adc-a7db-a6df464e261e] Pending
	I0110 01:54:16.274086   15429 system_pods.go:61] "snapshot-controller-6588d87457-ts292" [50162603-bc54-4d0e-8ae6-4a688aa837dd] Pending
	I0110 01:54:16.274091   15429 system_pods.go:61] "storage-provisioner" [516be01a-36c3-4ba8-b497-8be331085010] Pending
	I0110 01:54:16.274100   15429 system_pods.go:74] duration metric: took 10.06837ms to wait for pod list to return data ...
	I0110 01:54:16.274113   15429 default_sa.go:34] waiting for default service account to be created ...
	I0110 01:54:16.275956   15429 default_sa.go:45] found service account: "default"
	I0110 01:54:16.275985   15429 default_sa.go:55] duration metric: took 1.8537ms for default service account to be created ...
	I0110 01:54:16.276002   15429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 01:54:16.279112   15429 system_pods.go:86] 20 kube-system pods found
	I0110 01:54:16.279135   15429 system_pods.go:89] "amd-gpu-device-plugin-r27zc" [61f0da90-77f5-49e4-ab9f-eda90d4e04ea] Pending
	I0110 01:54:16.279146   15429 system_pods.go:89] "coredns-7d764666f9-zhk8p" [fccf2c93-6c49-43c6-937a-5b05f1f2f018] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 01:54:16.279156   15429 system_pods.go:89] "csi-hostpath-attacher-0" [d808a7e6-5d96-4d63-8a02-98f61dae8b39] Pending
	I0110 01:54:16.279166   15429 system_pods.go:89] "csi-hostpath-resizer-0" [f0e0eb04-cd83-4f67-8f0f-ce50a6df09ad] Pending
	I0110 01:54:16.279174   15429 system_pods.go:89] "csi-hostpathplugin-9bjcs" [0cd76d52-c549-4655-8a86-d10512fb7bd2] Pending
	I0110 01:54:16.279179   15429 system_pods.go:89] "etcd-addons-600454" [45a99124-ab40-4d88-876d-fafda6c9a126] Running
	I0110 01:54:16.279189   15429 system_pods.go:89] "kindnet-nw7pc" [b5eb404a-0888-4dd0-873e-c644c974660c] Running
	I0110 01:54:16.279197   15429 system_pods.go:89] "kube-apiserver-addons-600454" [c7b16823-4da2-429d-a189-8c63bc51318e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 01:54:16.279208   15429 system_pods.go:89] "kube-controller-manager-addons-600454" [f4882c9a-36bb-435b-88e5-c21ca63bcc0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 01:54:16.279216   15429 system_pods.go:89] "kube-ingress-dns-minikube" [306271d0-7ec7-4b01-afba-2d85534d0b6b] Pending
	I0110 01:54:16.279220   15429 system_pods.go:89] "kube-proxy-n6xgk" [2300e881-f633-4ae4-833b-cde1e63efd3a] Running
	I0110 01:54:16.279234   15429 system_pods.go:89] "kube-scheduler-addons-600454" [9c4d93dd-ffe8-4a50-8c04-0562e44de0f6] Running
	I0110 01:54:16.279245   15429 system_pods.go:89] "metrics-server-5778bb4788-pj8xt" [81ec1b8b-b693-480e-b5c6-1d50cb816a02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:16.279254   15429 system_pods.go:89] "nvidia-device-plugin-daemonset-842xc" [56576535-28b2-4154-90d5-fae5922238e3] Pending
	I0110 01:54:16.279266   15429 system_pods.go:89] "registry-788cd7d5bc-mlf8m" [94bf43da-60ab-405e-8e3c-ba8318d37ad2] Pending
	I0110 01:54:16.279277   15429 system_pods.go:89] "registry-creds-567fb78d95-jkz6h" [42464cff-62ad-44e8-8ad0-a33b9cd7ff90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:16.279282   15429 system_pods.go:89] "registry-proxy-zx8d4" [591ae63a-173e-4fb4-89b9-9fd8522cb1c1] Pending
	I0110 01:54:16.279292   15429 system_pods.go:89] "snapshot-controller-6588d87457-f4v27" [0efdaebf-eda7-4adc-a7db-a6df464e261e] Pending
	I0110 01:54:16.279298   15429 system_pods.go:89] "snapshot-controller-6588d87457-ts292" [50162603-bc54-4d0e-8ae6-4a688aa837dd] Pending
	I0110 01:54:16.279307   15429 system_pods.go:89] "storage-provisioner" [516be01a-36c3-4ba8-b497-8be331085010] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 01:54:16.279327   15429 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 01:54:16.312604   15429 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 01:54:16.312625   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:16.474696   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:16.518513   15429 system_pods.go:86] 20 kube-system pods found
	I0110 01:54:16.518561   15429 system_pods.go:89] "amd-gpu-device-plugin-r27zc" [61f0da90-77f5-49e4-ab9f-eda90d4e04ea] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 01:54:16.518575   15429 system_pods.go:89] "coredns-7d764666f9-zhk8p" [fccf2c93-6c49-43c6-937a-5b05f1f2f018] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 01:54:16.518588   15429 system_pods.go:89] "csi-hostpath-attacher-0" [d808a7e6-5d96-4d63-8a02-98f61dae8b39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 01:54:16.518602   15429 system_pods.go:89] "csi-hostpath-resizer-0" [f0e0eb04-cd83-4f67-8f0f-ce50a6df09ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 01:54:16.518618   15429 system_pods.go:89] "csi-hostpathplugin-9bjcs" [0cd76d52-c549-4655-8a86-d10512fb7bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:16.518627   15429 system_pods.go:89] "etcd-addons-600454" [45a99124-ab40-4d88-876d-fafda6c9a126] Running
	I0110 01:54:16.518635   15429 system_pods.go:89] "kindnet-nw7pc" [b5eb404a-0888-4dd0-873e-c644c974660c] Running
	I0110 01:54:16.518650   15429 system_pods.go:89] "kube-apiserver-addons-600454" [c7b16823-4da2-429d-a189-8c63bc51318e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 01:54:16.518665   15429 system_pods.go:89] "kube-controller-manager-addons-600454" [f4882c9a-36bb-435b-88e5-c21ca63bcc0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 01:54:16.518683   15429 system_pods.go:89] "kube-ingress-dns-minikube" [306271d0-7ec7-4b01-afba-2d85534d0b6b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 01:54:16.518698   15429 system_pods.go:89] "kube-proxy-n6xgk" [2300e881-f633-4ae4-833b-cde1e63efd3a] Running
	I0110 01:54:16.518735   15429 system_pods.go:89] "kube-scheduler-addons-600454" [9c4d93dd-ffe8-4a50-8c04-0562e44de0f6] Running
	I0110 01:54:16.519123   15429 system_pods.go:89] "metrics-server-5778bb4788-pj8xt" [81ec1b8b-b693-480e-b5c6-1d50cb816a02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:16.519136   15429 system_pods.go:89] "nvidia-device-plugin-daemonset-842xc" [56576535-28b2-4154-90d5-fae5922238e3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 01:54:16.519143   15429 system_pods.go:89] "registry-788cd7d5bc-mlf8m" [94bf43da-60ab-405e-8e3c-ba8318d37ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 01:54:16.519149   15429 system_pods.go:89] "registry-creds-567fb78d95-jkz6h" [42464cff-62ad-44e8-8ad0-a33b9cd7ff90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:16.519156   15429 system_pods.go:89] "registry-proxy-zx8d4" [591ae63a-173e-4fb4-89b9-9fd8522cb1c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 01:54:16.519164   15429 system_pods.go:89] "snapshot-controller-6588d87457-f4v27" [0efdaebf-eda7-4adc-a7db-a6df464e261e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:16.519173   15429 system_pods.go:89] "snapshot-controller-6588d87457-ts292" [50162603-bc54-4d0e-8ae6-4a688aa837dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:16.519179   15429 system_pods.go:89] "storage-provisioner" [516be01a-36c3-4ba8-b497-8be331085010] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 01:54:16.770237   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:16.771564   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:16.872452   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:16.873732   15429 system_pods.go:86] 20 kube-system pods found
	I0110 01:54:16.873766   15429 system_pods.go:89] "amd-gpu-device-plugin-r27zc" [61f0da90-77f5-49e4-ab9f-eda90d4e04ea] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 01:54:16.873776   15429 system_pods.go:89] "coredns-7d764666f9-zhk8p" [fccf2c93-6c49-43c6-937a-5b05f1f2f018] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 01:54:16.873784   15429 system_pods.go:89] "csi-hostpath-attacher-0" [d808a7e6-5d96-4d63-8a02-98f61dae8b39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 01:54:16.873791   15429 system_pods.go:89] "csi-hostpath-resizer-0" [f0e0eb04-cd83-4f67-8f0f-ce50a6df09ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 01:54:16.873798   15429 system_pods.go:89] "csi-hostpathplugin-9bjcs" [0cd76d52-c549-4655-8a86-d10512fb7bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:16.873809   15429 system_pods.go:89] "etcd-addons-600454" [45a99124-ab40-4d88-876d-fafda6c9a126] Running
	I0110 01:54:16.873818   15429 system_pods.go:89] "kindnet-nw7pc" [b5eb404a-0888-4dd0-873e-c644c974660c] Running
	I0110 01:54:16.873830   15429 system_pods.go:89] "kube-apiserver-addons-600454" [c7b16823-4da2-429d-a189-8c63bc51318e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 01:54:16.873839   15429 system_pods.go:89] "kube-controller-manager-addons-600454" [f4882c9a-36bb-435b-88e5-c21ca63bcc0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 01:54:16.873850   15429 system_pods.go:89] "kube-ingress-dns-minikube" [306271d0-7ec7-4b01-afba-2d85534d0b6b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 01:54:16.873858   15429 system_pods.go:89] "kube-proxy-n6xgk" [2300e881-f633-4ae4-833b-cde1e63efd3a] Running
	I0110 01:54:16.873864   15429 system_pods.go:89] "kube-scheduler-addons-600454" [9c4d93dd-ffe8-4a50-8c04-0562e44de0f6] Running
	I0110 01:54:16.873872   15429 system_pods.go:89] "metrics-server-5778bb4788-pj8xt" [81ec1b8b-b693-480e-b5c6-1d50cb816a02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:16.873878   15429 system_pods.go:89] "nvidia-device-plugin-daemonset-842xc" [56576535-28b2-4154-90d5-fae5922238e3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 01:54:16.873911   15429 system_pods.go:89] "registry-788cd7d5bc-mlf8m" [94bf43da-60ab-405e-8e3c-ba8318d37ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 01:54:16.873924   15429 system_pods.go:89] "registry-creds-567fb78d95-jkz6h" [42464cff-62ad-44e8-8ad0-a33b9cd7ff90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:16.873931   15429 system_pods.go:89] "registry-proxy-zx8d4" [591ae63a-173e-4fb4-89b9-9fd8522cb1c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 01:54:16.873942   15429 system_pods.go:89] "snapshot-controller-6588d87457-f4v27" [0efdaebf-eda7-4adc-a7db-a6df464e261e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:16.873953   15429 system_pods.go:89] "snapshot-controller-6588d87457-ts292" [50162603-bc54-4d0e-8ae6-4a688aa837dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:16.873965   15429 system_pods.go:89] "storage-provisioner" [516be01a-36c3-4ba8-b497-8be331085010] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 01:54:16.976582   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:17.200356   15429 system_pods.go:86] 20 kube-system pods found
	I0110 01:54:17.200395   15429 system_pods.go:89] "amd-gpu-device-plugin-r27zc" [61f0da90-77f5-49e4-ab9f-eda90d4e04ea] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 01:54:17.200404   15429 system_pods.go:89] "coredns-7d764666f9-zhk8p" [fccf2c93-6c49-43c6-937a-5b05f1f2f018] Running
	I0110 01:54:17.200417   15429 system_pods.go:89] "csi-hostpath-attacher-0" [d808a7e6-5d96-4d63-8a02-98f61dae8b39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 01:54:17.200430   15429 system_pods.go:89] "csi-hostpath-resizer-0" [f0e0eb04-cd83-4f67-8f0f-ce50a6df09ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 01:54:17.200439   15429 system_pods.go:89] "csi-hostpathplugin-9bjcs" [0cd76d52-c549-4655-8a86-d10512fb7bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:17.200455   15429 system_pods.go:89] "etcd-addons-600454" [45a99124-ab40-4d88-876d-fafda6c9a126] Running
	I0110 01:54:17.200461   15429 system_pods.go:89] "kindnet-nw7pc" [b5eb404a-0888-4dd0-873e-c644c974660c] Running
	I0110 01:54:17.200470   15429 system_pods.go:89] "kube-apiserver-addons-600454" [c7b16823-4da2-429d-a189-8c63bc51318e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 01:54:17.200477   15429 system_pods.go:89] "kube-controller-manager-addons-600454" [f4882c9a-36bb-435b-88e5-c21ca63bcc0c] Running
	I0110 01:54:17.200486   15429 system_pods.go:89] "kube-ingress-dns-minikube" [306271d0-7ec7-4b01-afba-2d85534d0b6b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 01:54:17.200495   15429 system_pods.go:89] "kube-proxy-n6xgk" [2300e881-f633-4ae4-833b-cde1e63efd3a] Running
	I0110 01:54:17.200502   15429 system_pods.go:89] "kube-scheduler-addons-600454" [9c4d93dd-ffe8-4a50-8c04-0562e44de0f6] Running
	I0110 01:54:17.200514   15429 system_pods.go:89] "metrics-server-5778bb4788-pj8xt" [81ec1b8b-b693-480e-b5c6-1d50cb816a02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:17.200523   15429 system_pods.go:89] "nvidia-device-plugin-daemonset-842xc" [56576535-28b2-4154-90d5-fae5922238e3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 01:54:17.200533   15429 system_pods.go:89] "registry-788cd7d5bc-mlf8m" [94bf43da-60ab-405e-8e3c-ba8318d37ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 01:54:17.200545   15429 system_pods.go:89] "registry-creds-567fb78d95-jkz6h" [42464cff-62ad-44e8-8ad0-a33b9cd7ff90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:17.200557   15429 system_pods.go:89] "registry-proxy-zx8d4" [591ae63a-173e-4fb4-89b9-9fd8522cb1c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 01:54:17.200568   15429 system_pods.go:89] "snapshot-controller-6588d87457-f4v27" [0efdaebf-eda7-4adc-a7db-a6df464e261e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:17.200580   15429 system_pods.go:89] "snapshot-controller-6588d87457-ts292" [50162603-bc54-4d0e-8ae6-4a688aa837dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:17.200589   15429 system_pods.go:89] "storage-provisioner" [516be01a-36c3-4ba8-b497-8be331085010] Running
	I0110 01:54:17.200599   15429 system_pods.go:126] duration metric: took 924.589367ms to wait for k8s-apps to be running ...
	I0110 01:54:17.200613   15429 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 01:54:17.200666   15429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 01:54:17.217304   15429 system_svc.go:56] duration metric: took 16.678146ms WaitForService to wait for kubelet
	I0110 01:54:17.217346   15429 kubeadm.go:587] duration metric: took 14.043693804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 01:54:17.217370   15429 node_conditions.go:102] verifying NodePressure condition ...
	I0110 01:54:17.220292   15429 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 01:54:17.220317   15429 node_conditions.go:123] node cpu capacity is 8
	I0110 01:54:17.220336   15429 node_conditions.go:105] duration metric: took 2.959599ms to run NodePressure ...
	I0110 01:54:17.220349   15429 start.go:242] waiting for startup goroutines ...
	I0110 01:54:17.270720   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:17.270951   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:17.314102   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:17.475955   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:17.770979   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:17.771041   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:17.813560   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:17.974861   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:18.270647   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:18.270824   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:18.313000   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:18.475616   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:18.772854   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:18.773309   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:18.818131   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:18.976264   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:19.271051   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:19.271051   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:19.313030   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:19.475441   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:19.770603   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:19.771094   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:19.813150   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:19.975542   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:20.271415   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:20.271520   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:20.313404   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:20.474689   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:20.771210   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:20.771326   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:20.813582   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:20.975163   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:21.270843   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:21.271131   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:21.313270   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:21.475638   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:21.770968   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:21.771117   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:21.813928   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:21.976259   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:22.270034   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:22.270750   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:22.313148   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:22.475266   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:22.775694   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:22.776197   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:22.814576   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:22.975607   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:23.271648   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:23.273153   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:23.313270   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:23.475973   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:23.770839   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:23.770908   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:23.814032   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:23.975604   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:24.271066   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:24.271129   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:24.312750   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:24.474900   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:24.843230   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:24.843245   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:24.843403   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:24.975855   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:25.270644   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:25.270815   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:25.312557   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:25.474644   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:25.770937   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:25.770976   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:25.814369   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:25.975871   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:26.271387   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:26.271391   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:26.312856   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:26.475288   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:26.769748   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:26.770488   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:26.813363   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:26.974486   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:27.271348   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:27.271581   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:27.313673   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:27.475424   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:27.770672   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:27.770917   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:27.813912   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:27.975477   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:28.270554   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:28.271027   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:28.312728   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:28.475050   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:28.770119   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:28.770518   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:28.819082   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:28.980440   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:29.270665   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:29.270806   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:29.313771   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:29.475185   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:29.770573   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:29.770811   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:29.814348   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:29.976041   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:30.270596   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:30.270989   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:30.314001   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:30.475459   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:30.770527   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:30.770871   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:30.813529   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:30.975124   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:31.269846   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:31.270425   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:31.312958   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:31.475343   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:31.770814   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:31.771060   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:31.871470   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:31.975953   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:32.271185   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:32.271244   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:32.317480   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:32.475842   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:32.771526   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:32.773662   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:32.813054   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:32.975695   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:33.271424   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:33.271580   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:33.313880   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:33.475077   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:33.770601   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:33.770815   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:33.813208   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:33.975464   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:34.270851   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:34.270920   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:34.371207   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:34.475567   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:34.770633   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:34.770843   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:34.813457   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:34.975511   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:35.272157   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:35.272220   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:35.312789   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:35.474852   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:35.770418   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:35.770612   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:35.813171   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:35.975538   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:36.270352   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:36.270412   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:36.312814   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:36.475584   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:36.772399   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:36.773584   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:36.814463   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:36.974809   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:37.270459   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:37.270589   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:37.313620   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:37.474781   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:37.770624   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:37.770686   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:37.813494   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:37.974855   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:38.270975   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:38.271051   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:38.312373   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:38.474610   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:38.770527   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:38.770602   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:38.813337   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:38.975145   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:39.270850   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:39.270868   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:39.371087   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:39.475580   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:39.770714   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:39.770731   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:39.813914   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:39.975483   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:40.273025   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:40.273141   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:40.313228   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:40.475258   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:40.770294   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:40.770658   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:40.813130   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:40.975216   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:41.270879   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:41.271405   15429 kapi.go:107] duration metric: took 36.503454688s to wait for kubernetes.io/minikube-addons=registry ...
	I0110 01:54:41.313807   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:41.475695   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:41.771221   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:41.813838   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:41.975802   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:42.288645   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:42.365837   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:42.606788   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:42.770912   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:42.813931   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:42.975633   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:43.270223   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:43.313658   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:43.476818   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:43.771235   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:43.813993   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:43.976270   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:44.271439   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:44.313366   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:44.475948   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:44.770973   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:44.814086   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:44.984435   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:45.270028   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:45.313870   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:45.475109   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:45.770013   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:45.813878   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:45.975979   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:46.270941   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:46.313631   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:46.475111   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:46.770131   15429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:46.812792   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:46.975238   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:47.271160   15429 kapi.go:107] duration metric: took 42.504301562s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0110 01:54:47.313703   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:47.476183   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:47.813546   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:47.975407   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:48.313800   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:48.475389   15429 kapi.go:107] duration metric: took 37.003147826s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0110 01:54:48.478652   15429 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-600454 cluster.
	I0110 01:54:48.480416   15429 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0110 01:54:48.481907   15429 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0110 01:54:48.813985   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:49.313864   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:49.813773   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:50.313260   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:50.813637   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:51.313145   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:51.813596   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:52.313977   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:52.813792   15429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:53.313139   15429 kapi.go:107] duration metric: took 48.003218074s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0110 01:54:53.315515   15429 out.go:179] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, registry-creds, inspektor-gadget, amd-gpu-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0110 01:54:53.316646   15429 addons.go:530] duration metric: took 50.142962212s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns registry-creds inspektor-gadget amd-gpu-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0110 01:54:53.316690   15429 start.go:247] waiting for cluster config update ...
	I0110 01:54:53.316713   15429 start.go:256] writing updated cluster config ...
	I0110 01:54:53.316990   15429 ssh_runner.go:195] Run: rm -f paused
	I0110 01:54:53.321808   15429 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 01:54:53.324552   15429 pod_ready.go:83] waiting for pod "coredns-7d764666f9-zhk8p" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.329453   15429 pod_ready.go:94] pod "coredns-7d764666f9-zhk8p" is "Ready"
	I0110 01:54:53.329477   15429 pod_ready.go:86] duration metric: took 4.903252ms for pod "coredns-7d764666f9-zhk8p" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.331260   15429 pod_ready.go:83] waiting for pod "etcd-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.334315   15429 pod_ready.go:94] pod "etcd-addons-600454" is "Ready"
	I0110 01:54:53.334334   15429 pod_ready.go:86] duration metric: took 3.055332ms for pod "etcd-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.335770   15429 pod_ready.go:83] waiting for pod "kube-apiserver-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.338796   15429 pod_ready.go:94] pod "kube-apiserver-addons-600454" is "Ready"
	I0110 01:54:53.338815   15429 pod_ready.go:86] duration metric: took 3.029284ms for pod "kube-apiserver-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.340480   15429 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.725660   15429 pod_ready.go:94] pod "kube-controller-manager-addons-600454" is "Ready"
	I0110 01:54:53.725684   15429 pod_ready.go:86] duration metric: took 385.180775ms for pod "kube-controller-manager-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:53.925368   15429 pod_ready.go:83] waiting for pod "kube-proxy-n6xgk" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:54.325455   15429 pod_ready.go:94] pod "kube-proxy-n6xgk" is "Ready"
	I0110 01:54:54.325477   15429 pod_ready.go:86] duration metric: took 400.087866ms for pod "kube-proxy-n6xgk" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:54.525448   15429 pod_ready.go:83] waiting for pod "kube-scheduler-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:54.925640   15429 pod_ready.go:94] pod "kube-scheduler-addons-600454" is "Ready"
	I0110 01:54:54.925665   15429 pod_ready.go:86] duration metric: took 400.195612ms for pod "kube-scheduler-addons-600454" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:54:54.925676   15429 pod_ready.go:40] duration metric: took 1.603838136s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 01:54:54.970955   15429 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 01:54:54.972608   15429 out.go:179] * Done! kubectl is now configured to use "addons-600454" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.497023927Z" level=info msg="Ran pod sandbox 3cb3d626ac0f196b6487dbf02902dd3889c195d402bfbe0937d30ba63698df5c with infra container: default/registry-test/POD" id=5b34634d-0703-464d-88f9-867eff58d10a name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.760754776Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737" id=1d3bf666-506f-4ef2-baac-3929f69a7f71 name=/runtime.v1.ImageService/PullImage
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.761330562Z" level=info msg="Checking image status: busybox:stable" id=74ccbbfc-7a84-4525-846a-b67798430d27 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.761461218Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.762431891Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:latest" id=1b008fba-9b1c-4bd7-9cb2-84485a7157e0 name=/runtime.v1.ImageService/PullImage
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.762740437Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:latest\""
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.763114907Z" level=info msg="Checking image status: busybox:stable" id=f45b0957-76df-4fbd-a3f2-35934a4dbc59 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.763226872Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.766591917Z" level=info msg="Creating container: default/test-local-path/busybox" id=fb9c817b-45ba-4987-adb9-41f5dab0e9c6 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.7667169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.771829734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.772266543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.806245412Z" level=info msg="Created container b73c8420312a4372cf4166b85b5b0715f3dcde80265a6a5124355e363fa1bd4e: default/test-local-path/busybox" id=fb9c817b-45ba-4987-adb9-41f5dab0e9c6 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.806823842Z" level=info msg="Starting container: b73c8420312a4372cf4166b85b5b0715f3dcde80265a6a5124355e363fa1bd4e" id=555f9bbf-ab4a-497a-aaa4-6fb2df92e80f name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 01:55:14 addons-600454 crio[769]: time="2026-01-10T01:55:14.808724167Z" level=info msg="Started container" PID=6866 containerID=b73c8420312a4372cf4166b85b5b0715f3dcde80265a6a5124355e363fa1bd4e description=default/test-local-path/busybox id=555f9bbf-ab4a-497a-aaa4-6fb2df92e80f name=/runtime.v1.RuntimeService/StartContainer sandboxID=314bddf73c0c4164da4f80b6f2b3cf33c4e9d611125f089c8e2ba6c97bb76cf9
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.329374261Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee" id=1b008fba-9b1c-4bd7-9cb2-84485a7157e0 name=/runtime.v1.ImageService/PullImage
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.329960072Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:latest" id=9b1312da-1c4d-489f-8eb0-1173a5eec72f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.331583428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox" id=d3bafea3-d9b8-496c-b912-dfc5c088c695 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.335317746Z" level=info msg="Creating container: default/registry-test/registry-test" id=5d2e4b91-cda3-406c-a70e-7fa7a1f9bf57 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.33546863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.342962871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.343646094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.387148653Z" level=info msg="Created container 216095b0e36e93f5b1d47502a9b866713158794ff2e82373fb75eb00167cdd05: default/registry-test/registry-test" id=5d2e4b91-cda3-406c-a70e-7fa7a1f9bf57 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.387860053Z" level=info msg="Starting container: 216095b0e36e93f5b1d47502a9b866713158794ff2e82373fb75eb00167cdd05" id=0761f97c-0442-4e1f-9eed-b7822508ffee name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 01:55:15 addons-600454 crio[769]: time="2026-01-10T01:55:15.390264325Z" level=info msg="Started container" PID=6937 containerID=216095b0e36e93f5b1d47502a9b866713158794ff2e82373fb75eb00167cdd05 description=default/registry-test/registry-test id=0761f97c-0442-4e1f-9eed-b7822508ffee name=/runtime.v1.RuntimeService/StartContainer sandboxID=3cb3d626ac0f196b6487dbf02902dd3889c195d402bfbe0937d30ba63698df5c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	216095b0e36e9       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          Less than a second ago   Exited              registry-test                            0                   3cb3d626ac0f1       registry-test                                                default
	b73c8420312a4       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            1 second ago             Exited              busybox                                  0                   314bddf73c0c4       test-local-path                                              default
	f92888de60171       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            5 seconds ago            Exited              helper-pod                               0                   00f758a250bb9       helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69   local-path-storage
	92fdb11644c16       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          19 seconds ago           Running             busybox                                  0                   78d7e90c7c7e0       busybox                                                      default
	d18c5e99eeeec       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          23 seconds ago           Running             csi-snapshotter                          0                   19de50b501953       csi-hostpathplugin-9bjcs                                     kube-system
	c1f126abe0c18       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          24 seconds ago           Running             csi-provisioner                          0                   19de50b501953       csi-hostpathplugin-9bjcs                                     kube-system
	d167ffd155e8e       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            25 seconds ago           Running             liveness-probe                           0                   19de50b501953       csi-hostpathplugin-9bjcs                                     kube-system
	62f4a9ae053ab       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           26 seconds ago           Running             hostpath                                 0                   19de50b501953       csi-hostpathplugin-9bjcs                                     kube-system
	889a7309650ee       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                27 seconds ago           Running             node-driver-registrar                    0                   19de50b501953       csi-hostpathplugin-9bjcs                                     kube-system
	f4bb6aa985e5d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 27 seconds ago           Running             gcp-auth                                 0                   265fb8656bc10       gcp-auth-5bbcf684b5-2cm2g                                    gcp-auth
	4c534db480830       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             28 seconds ago           Running             controller                               0                   19188ee30f319       ingress-nginx-controller-7847b5c79c-hb252                    ingress-nginx
	fb6b3014a2bfd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            32 seconds ago           Running             gadget                                   0                   641c898f81bd3       gadget-g22r7                                                 gadget
	d90dac6fc570a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              35 seconds ago           Running             registry-proxy                           0                   97a3eabfe309c       registry-proxy-zx8d4                                         kube-system
	be34361c1e4a4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   36 seconds ago           Exited              patch                                    0                   a012488569bbd       gcp-auth-certs-patch-kzqlg                                   gcp-auth
	8e1356653be04       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     36 seconds ago           Running             amd-gpu-device-plugin                    0                   efa2c4a1c3f95       amd-gpu-device-plugin-r27zc                                  kube-system
	2e415da3430a8       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     37 seconds ago           Running             nvidia-device-plugin-ctr                 0                   619490f4c7bd1       nvidia-device-plugin-daemonset-842xc                         kube-system
	d7e22f89ede22       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago           Running             volume-snapshot-controller               0                   4939fdb90e6bb       snapshot-controller-6588d87457-ts292                         kube-system
	3856a47367790       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   40 seconds ago           Running             csi-external-health-monitor-controller   0                   19de50b501953       csi-hostpathplugin-9bjcs                                     kube-system
	089c2cf8ab7c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   41 seconds ago           Exited              patch                                    0                   e46e32a9b9332       ingress-nginx-admission-patch-zrdwq                          ingress-nginx
	e8b5123fb61c9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      41 seconds ago           Running             volume-snapshot-controller               0                   679606eaf89ec       snapshot-controller-6588d87457-f4v27                         kube-system
	672dc05e847e5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             43 seconds ago           Running             csi-attacher                             0                   ed2bbbbeab3b5       csi-hostpath-attacher-0                                      kube-system
	da427f8457d53       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   43 seconds ago           Exited              create                                   0                   55f4d9cfa5047       gcp-auth-certs-create-kb9mr                                  gcp-auth
	0eea9d80ad167       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             44 seconds ago           Running             local-path-provisioner                   0                   a6b71ea958a05       local-path-provisioner-c44bcd496-gml8l                       local-path-storage
	8a71e59f675c7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   45 seconds ago           Exited              create                                   0                   df80b86754337       ingress-nginx-admission-create-ztv9d                         ingress-nginx
	d621433be1b1a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              45 seconds ago           Running             csi-resizer                              0                   63bfa235a3911       csi-hostpath-resizer-0                                       kube-system
	07daa69676bb9       ghcr.io/manusa/yakd@sha256:45d2fe163841511e351ae36a5e434fb854a886b0d6a70cea692bd707543fd8c6                                                  46 seconds ago           Running             yakd                                     0                   9b7491f04cb3a       yakd-dashboard-7bcf5795cd-h4qs7                              yakd-dashboard
	0310535c02c60       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           49 seconds ago           Running             registry                                 0                   28c56e0135aa7       registry-788cd7d5bc-mlf8m                                    kube-system
	092d91f03277e       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               51 seconds ago           Running             cloud-spanner-emulator                   0                   fba15121ee500       cloud-spanner-emulator-5649ccbc87-7t6j7                      default
	9b436cb1f9906       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               53 seconds ago           Running             minikube-ingress-dns                     0                   29de2b5d240cf       kube-ingress-dns-minikube                                    kube-system
	3f285e15d5a29       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        58 seconds ago           Running             metrics-server                           0                   1d0bb6de60f0a       metrics-server-5778bb4788-pj8xt                              kube-system
	f721f38b4bc9f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             59 seconds ago           Running             coredns                                  0                   b1cbb27ade544       coredns-7d764666f9-zhk8p                                     kube-system
	a750179be8537       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             59 seconds ago           Running             storage-provisioner                      0                   0941cf836240f       storage-provisioner                                          kube-system
	27808d90b2bc2       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           About a minute ago       Running             kindnet-cni                              0                   6ea12cd7945b5       kindnet-nw7pc                                                kube-system
	c462386867f5a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             About a minute ago       Running             kube-proxy                               0                   36644c3017c78       kube-proxy-n6xgk                                             kube-system
	3c3166ab23656       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago       Running             kube-controller-manager                  0                   dfcef7dd98198       kube-controller-manager-addons-600454                        kube-system
	288a49ac47b29       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago       Running             kube-scheduler                           0                   7cc31b84ad15e       kube-scheduler-addons-600454                                 kube-system
	e810a82b4230f       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago       Running             etcd                                     0                   395aa33d87e79       etcd-addons-600454                                           kube-system
	6d1e5be9f6b26       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago       Running             kube-apiserver                           0                   16d83a9a33226       kube-apiserver-addons-600454                                 kube-system
	
	
	==> coredns [f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac] <==
	[INFO] 10.244.0.18:47997 - 36729 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134533s
	[INFO] 10.244.0.18:41992 - 17058 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000072466s
	[INFO] 10.244.0.18:41992 - 17411 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000120828s
	[INFO] 10.244.0.18:45888 - 25604 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000048966s
	[INFO] 10.244.0.18:45888 - 25867 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000093519s
	[INFO] 10.244.0.18:47872 - 17924 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061566s
	[INFO] 10.244.0.18:47872 - 17700 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000103385s
	[INFO] 10.244.0.18:34482 - 19646 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091745s
	[INFO] 10.244.0.18:34482 - 19419 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000118617s
	[INFO] 10.244.0.21:56263 - 17930 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199914s
	[INFO] 10.244.0.21:44951 - 42478 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000237215s
	[INFO] 10.244.0.21:58229 - 11992 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131213s
	[INFO] 10.244.0.21:43677 - 45571 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129277s
	[INFO] 10.244.0.21:41804 - 29740 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152795s
	[INFO] 10.244.0.21:40208 - 20905 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160917s
	[INFO] 10.244.0.21:46237 - 58732 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004586986s
	[INFO] 10.244.0.21:34553 - 6155 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005011621s
	[INFO] 10.244.0.21:55718 - 6353 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004506021s
	[INFO] 10.244.0.21:49273 - 51452 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004857294s
	[INFO] 10.244.0.21:47004 - 45802 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006240703s
	[INFO] 10.244.0.21:44331 - 3110 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006003833s
	[INFO] 10.244.0.21:58088 - 7017 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000934419s
	[INFO] 10.244.0.21:46601 - 58861 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002146269s
	[INFO] 10.244.0.25:56477 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000231593s
	[INFO] 10.244.0.25:40460 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016803s
	
	
	==> describe nodes <==
	Name:               addons-600454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-600454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=addons-600454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T01_53_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-600454
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-600454"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 01:53:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-600454
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 01:55:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 01:55:08 +0000   Sat, 10 Jan 2026 01:53:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 01:55:08 +0000   Sat, 10 Jan 2026 01:53:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 01:55:08 +0000   Sat, 10 Jan 2026 01:53:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 01:55:08 +0000   Sat, 10 Jan 2026 01:54:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-600454
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                ff94d36e-6a4e-4620-a343-8df5fc603de8
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     cloud-spanner-emulator-5649ccbc87-7t6j7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  default                     registry-test                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  gadget                      gadget-g22r7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  gcp-auth                    gcp-auth-5bbcf684b5-2cm2g                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-hb252    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         72s
	  kube-system                 amd-gpu-device-plugin-r27zc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 coredns-7d764666f9-zhk8p                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     73s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 csi-hostpathplugin-9bjcs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 etcd-addons-600454                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         78s
	  kube-system                 kindnet-nw7pc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-addons-600454                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-addons-600454        200m (2%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-n6xgk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-addons-600454                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-5778bb4788-pj8xt              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         72s
	  kube-system                 nvidia-device-plugin-daemonset-842xc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-788cd7d5bc-mlf8m                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 registry-creds-567fb78d95-jkz6h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 registry-proxy-zx8d4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 snapshot-controller-6588d87457-f4v27         0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 snapshot-controller-6588d87457-ts292         0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  local-path-storage          local-path-provisioner-c44bcd496-gml8l       0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-h4qs7              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  74s   node-controller  Node addons-600454 event: Registered Node addons-600454 in Controller
	
	
	==> dmesg <==
	[Jan10 01:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001880] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378214] i8042: Warning: Keylock active
	[  +0.012673] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498024] block sda: the capability attribute has been deprecated.
	[  +0.086955] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024715] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb] <==
	{"level":"info","ts":"2026-01-10T01:53:54.641450Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T01:53:54.641545Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T01:53:54.642299Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-10T01:53:54.642324Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T01:53:54.642342Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2026-01-10T01:53:54.642353Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-10T01:53:54.642925Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:53:54.643383Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-600454 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T01:53:54.643406Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T01:53:54.643391Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T01:53:54.643647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T01:53:54.643671Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T01:53:54.643677Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:53:54.643742Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:53:54.643774Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:53:54.643805Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T01:53:54.643899Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T01:53:54.644944Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T01:53:54.645062Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T01:53:54.647938Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2026-01-10T01:53:54.647994Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T01:54:42.363948Z","caller":"traceutil/trace.go:172","msg":"trace[695859178] transaction","detail":"{read_only:false; response_revision:1109; number_of_response:1; }","duration":"123.575683ms","start":"2026-01-10T01:54:42.240351Z","end":"2026-01-10T01:54:42.363927Z","steps":["trace[695859178] 'process raft request'  (duration: 108.140347ms)","trace[695859178] 'compare'  (duration: 15.318255ms)"],"step_count":2}
	{"level":"warn","ts":"2026-01-10T01:54:42.602767Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.510186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T01:54:42.602827Z","caller":"traceutil/trace.go:172","msg":"trace[21899987] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1109; }","duration":"128.583457ms","start":"2026-01-10T01:54:42.474229Z","end":"2026-01-10T01:54:42.602812Z","steps":["trace[21899987] 'agreement among raft nodes before linearized reading'  (duration: 66.257524ms)","trace[21899987] 'range keys from in-memory index tree'  (duration: 62.223331ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T01:54:42.602939Z","caller":"traceutil/trace.go:172","msg":"trace[1776710567] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"233.191393ms","start":"2026-01-10T01:54:42.369728Z","end":"2026-01-10T01:54:42.602919Z","steps":["trace[1776710567] 'process raft request'  (duration: 170.781981ms)","trace[1776710567] 'compare'  (duration: 62.283706ms)"],"step_count":2}
	
	
	==> gcp-auth [f4bb6aa985e5d7a6dfd651234c6111e16f73a5d225fda02890cdfe9fa3908688] <==
	2026/01/10 01:54:48 GCP Auth Webhook started!
	2026/01/10 01:54:55 Ready to marshal response ...
	2026/01/10 01:54:55 Ready to write response ...
	2026/01/10 01:54:55 Ready to marshal response ...
	2026/01/10 01:54:55 Ready to write response ...
	2026/01/10 01:54:55 Ready to marshal response ...
	2026/01/10 01:54:55 Ready to write response ...
	2026/01/10 01:55:09 Ready to marshal response ...
	2026/01/10 01:55:09 Ready to write response ...
	2026/01/10 01:55:09 Ready to marshal response ...
	2026/01/10 01:55:09 Ready to write response ...
	2026/01/10 01:55:14 Ready to marshal response ...
	2026/01/10 01:55:14 Ready to write response ...
	
	
	==> kernel <==
	 01:55:16 up 37 min,  0 user,  load average: 2.28, 0.91, 0.34
	Linux addons-600454 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9] <==
	I0110 01:54:05.825138       1 main.go:148] setting mtu 1500 for CNI 
	I0110 01:54:05.825162       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 01:54:05.825193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T01:54:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 01:54:06.026523       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 01:54:06.026609       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 01:54:06.026622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 01:54:06.027113       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 01:54:06.321584       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 01:54:06.321618       1 metrics.go:72] Registering metrics
	I0110 01:54:06.321714       1 controller.go:711] "Syncing nftables rules"
	I0110 01:54:16.029427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:54:16.029506       1 main.go:301] handling current node
	I0110 01:54:26.026781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:54:26.026829       1 main.go:301] handling current node
	I0110 01:54:36.026405       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:54:36.026454       1 main.go:301] handling current node
	I0110 01:54:46.026320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:54:46.026383       1 main.go:301] handling current node
	I0110 01:54:56.026343       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:54:56.026397       1 main.go:301] handling current node
	I0110 01:55:06.027035       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:06.027078       1 main.go:301] handling current node
	I0110 01:55:16.026536       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:16.026584       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5] <==
	I0110 01:54:11.421160       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.24.121"}
	W0110 01:54:16.210385       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.24.121:443: connect: connection refused
	E0110 01:54:16.210521       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.24.121:443: connect: connection refused" logger="UnhandledError"
	W0110 01:54:16.211075       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.24.121:443: connect: connection refused
	E0110 01:54:16.211150       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.24.121:443: connect: connection refused" logger="UnhandledError"
	W0110 01:54:16.231207       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.24.121:443: connect: connection refused
	E0110 01:54:16.231244       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.24.121:443: connect: connection refused" logger="UnhandledError"
	W0110 01:54:16.233443       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.24.121:443: connect: connection refused
	E0110 01:54:16.233480       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.24.121:443: connect: connection refused" logger="UnhandledError"
	W0110 01:54:19.360185       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 01:54:19.360259       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0110 01:54:19.360201       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.175.43:443: connect: connection refused" logger="UnhandledError"
	E0110 01:54:19.362281       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.175.43:443: connect: connection refused" logger="UnhandledError"
	E0110 01:54:19.367912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.175.43:443: connect: connection refused" logger="UnhandledError"
	E0110 01:54:19.389105       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.175.43:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.175.43:443: connect: connection refused" logger="UnhandledError"
	I0110 01:54:19.461237       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0110 01:54:32.300688       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0110 01:54:32.309826       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0110 01:54:32.402558       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0110 01:54:32.411218       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E0110 01:55:03.656752       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49086: use of closed network connection
	E0110 01:55:03.797119       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49122: use of closed network connection
	
	
	==> kube-controller-manager [3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de] <==
	I0110 01:54:02.280699       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.280726       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.280937       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.280978       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.281021       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.281040       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.281103       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.281123       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.281222       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.280177       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.281223       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.285204       1 range_allocator.go:433] "Set node PodCIDR" node="addons-600454" podCIDRs=["10.244.0.0/24"]
	I0110 01:54:02.286819       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:54:02.288151       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.378799       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:02.378819       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 01:54:02.378825       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 01:54:02.387172       1 shared_informer.go:377] "Caches are synced"
	E0110 01:54:04.424653       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I0110 01:54:17.280705       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0110 01:54:32.293715       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0110 01:54:32.293789       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:54:32.394227       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:32.396168       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:54:32.496977       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563] <==
	I0110 01:54:04.238645       1 server_linux.go:53] "Using iptables proxy"
	I0110 01:54:04.408488       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:54:04.509808       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:04.509847       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0110 01:54:04.510005       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 01:54:04.596120       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 01:54:04.596196       1 server_linux.go:136] "Using iptables Proxier"
	I0110 01:54:04.606424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 01:54:04.614277       1 server.go:529] "Version info" version="v1.35.0"
	I0110 01:54:04.614317       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 01:54:04.616004       1 config.go:200] "Starting service config controller"
	I0110 01:54:04.616027       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 01:54:04.616050       1 config.go:106] "Starting endpoint slice config controller"
	I0110 01:54:04.616056       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 01:54:04.616070       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 01:54:04.616084       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 01:54:04.617521       1 config.go:309] "Starting node config controller"
	I0110 01:54:04.617584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 01:54:04.617615       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 01:54:04.716346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 01:54:04.716432       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 01:54:04.716734       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1] <==
	E0110 01:53:55.504771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 01:53:55.504796       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 01:53:55.504799       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 01:53:55.504826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 01:53:55.504862       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 01:53:55.504964       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 01:53:55.504994       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 01:53:55.505361       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 01:53:55.505519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 01:53:55.505741       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 01:53:56.401066       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 01:53:56.469529       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 01:53:56.486487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 01:53:56.556870       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 01:53:56.563497       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 01:53:56.580165       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 01:53:56.612225       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 01:53:56.628780       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 01:53:56.631286       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 01:53:56.652932       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 01:53:56.656444       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 01:53:56.669111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 01:53:56.690654       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 01:53:56.865359       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I0110 01:53:58.798584       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 01:55:09 addons-600454 kubelet[1268]: I0110 01:55:09.740621    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-data\") pod \"helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") " pod="local-path-storage/helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69"
	Jan 10 01:55:09 addons-600454 kubelet[1268]: I0110 01:55:09.740650    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-gcp-creds\") pod \"helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") " pod="local-path-storage/helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69"
	Jan 10 01:55:09 addons-600454 kubelet[1268]: I0110 01:55:09.740823    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/83f231d9-59fa-47d1-9d67-2f97ebace4ee-script\") pod \"helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") " pod="local-path-storage/helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69"
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.461840    1268 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-data\" (UniqueName: \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-data\") pod \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") "
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.461942    1268 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-gcp-creds\") pod \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") "
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.461979    1268 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-data" pod "83f231d9-59fa-47d1-9d67-2f97ebace4ee" (UID: "83f231d9-59fa-47d1-9d67-2f97ebace4ee"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.461986    1268 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/83f231d9-59fa-47d1-9d67-2f97ebace4ee-kube-api-access-ccqdr\" (UniqueName: \"kubernetes.io/projected/83f231d9-59fa-47d1-9d67-2f97ebace4ee-kube-api-access-ccqdr\") pod \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") "
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.462050    1268 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-gcp-creds" pod "83f231d9-59fa-47d1-9d67-2f97ebace4ee" (UID: "83f231d9-59fa-47d1-9d67-2f97ebace4ee"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.462085    1268 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/83f231d9-59fa-47d1-9d67-2f97ebace4ee-script\" (UniqueName: \"kubernetes.io/configmap/83f231d9-59fa-47d1-9d67-2f97ebace4ee-script\") pod \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\" (UID: \"83f231d9-59fa-47d1-9d67-2f97ebace4ee\") "
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.462230    1268 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-data\") on node \"addons-600454\" DevicePath \"\""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.462249    1268 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83f231d9-59fa-47d1-9d67-2f97ebace4ee-gcp-creds\") on node \"addons-600454\" DevicePath \"\""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.462428    1268 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83f231d9-59fa-47d1-9d67-2f97ebace4ee-script" pod "83f231d9-59fa-47d1-9d67-2f97ebace4ee" (UID: "83f231d9-59fa-47d1-9d67-2f97ebace4ee"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.463993    1268 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f231d9-59fa-47d1-9d67-2f97ebace4ee-kube-api-access-ccqdr" pod "83f231d9-59fa-47d1-9d67-2f97ebace4ee" (UID: "83f231d9-59fa-47d1-9d67-2f97ebace4ee"). InnerVolumeSpecName "kube-api-access-ccqdr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.562777    1268 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccqdr\" (UniqueName: \"kubernetes.io/projected/83f231d9-59fa-47d1-9d67-2f97ebace4ee-kube-api-access-ccqdr\") on node \"addons-600454\" DevicePath \"\""
	Jan 10 01:55:12 addons-600454 kubelet[1268]: I0110 01:55:12.562819    1268 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/83f231d9-59fa-47d1-9d67-2f97ebace4ee-script\") on node \"addons-600454\" DevicePath \"\""
	Jan 10 01:55:13 addons-600454 kubelet[1268]: I0110 01:55:13.341052    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f758a250bb9cafc8e2ed296bbe51805b6f7ab44ea802f1f68134c00fc25bd8"
	Jan 10 01:55:13 addons-600454 kubelet[1268]: E0110 01:55:13.342502    1268 status_manager.go:1045] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\" is forbidden: User \"system:node:addons-600454\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-600454' and this object" podUID="83f231d9-59fa-47d1-9d67-2f97ebace4ee" pod="local-path-storage/helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69"
	Jan 10 01:55:13 addons-600454 kubelet[1268]: E0110 01:55:13.549430    1268 status_manager.go:1045] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\" is forbidden: User \"system:node:addons-600454\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-600454' and this object" podUID="83f231d9-59fa-47d1-9d67-2f97ebace4ee" pod="local-path-storage/helper-pod-create-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69"
	Jan 10 01:55:13 addons-600454 kubelet[1268]: I0110 01:55:13.668617    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\" (UniqueName: \"kubernetes.io/host-path/017c55fd-f407-48bd-a9ad-cc6a8624588f-pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69\") pod \"test-local-path\" (UID: \"017c55fd-f407-48bd-a9ad-cc6a8624588f\") " pod="default/test-local-path"
	Jan 10 01:55:13 addons-600454 kubelet[1268]: I0110 01:55:13.668658    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c94q\" (UniqueName: \"kubernetes.io/projected/017c55fd-f407-48bd-a9ad-cc6a8624588f-kube-api-access-8c94q\") pod \"test-local-path\" (UID: \"017c55fd-f407-48bd-a9ad-cc6a8624588f\") " pod="default/test-local-path"
	Jan 10 01:55:13 addons-600454 kubelet[1268]: I0110 01:55:13.668681    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/017c55fd-f407-48bd-a9ad-cc6a8624588f-gcp-creds\") pod \"test-local-path\" (UID: \"017c55fd-f407-48bd-a9ad-cc6a8624588f\") " pod="default/test-local-path"
	Jan 10 01:55:14 addons-600454 kubelet[1268]: I0110 01:55:14.019662    1268 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="497a5650-7690-4c3e-9fb3-29ae92ea6cc1" path="/var/lib/kubelet/pods/497a5650-7690-4c3e-9fb3-29ae92ea6cc1/volumes"
	Jan 10 01:55:14 addons-600454 kubelet[1268]: I0110 01:55:14.020265    1268 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="83f231d9-59fa-47d1-9d67-2f97ebace4ee" path="/var/lib/kubelet/pods/83f231d9-59fa-47d1-9d67-2f97ebace4ee/volumes"
	Jan 10 01:55:14 addons-600454 kubelet[1268]: I0110 01:55:14.271998    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmcn\" (UniqueName: \"kubernetes.io/projected/cb1898e8-0272-409a-98f9-712df2dd18c9-kube-api-access-ghmcn\") pod \"registry-test\" (UID: \"cb1898e8-0272-409a-98f9-712df2dd18c9\") " pod="default/registry-test"
	Jan 10 01:55:14 addons-600454 kubelet[1268]: I0110 01:55:14.272042    1268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cb1898e8-0272-409a-98f9-712df2dd18c9-gcp-creds\") pod \"registry-test\" (UID: \"cb1898e8-0272-409a-98f9-712df2dd18c9\") " pod="default/registry-test"
	
	
	==> storage-provisioner [a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff] <==
	W0110 01:54:51.003024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:53.005851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:53.010255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:55.013027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:55.017002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:57.020159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:57.023273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:59.025838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:54:59.029307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:01.032036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:01.036268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:03.038991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:03.042420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:05.044840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:05.050048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:07.052926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:07.055952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:09.059203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:09.063588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:11.065769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:11.069075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:13.071189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:13.074851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:15.077783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:55:15.081663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
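The storage-provisioner log in the dump above is dominated by one client-go warning repeated every two seconds: v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the suggested replacement, listing EndpointSlices instead of Endpoints (the in-cluster config and the kube-system namespace are illustrative assumptions, not taken from the provisioner's code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster credentials, as a pod such as storage-provisioner would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// discovery.k8s.io/v1 EndpointSlice is the replacement the warning points to.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

These warnings are client-side deprecation notices only; they are unrelated to the addon-disable failures recorded below.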
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-600454 -n addons-600454
helpers_test.go:270: (dbg) Run:  kubectl --context addons-600454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: registry-test test-local-path ingress-nginx-admission-create-ztv9d ingress-nginx-admission-patch-zrdwq registry-creds-567fb78d95-jkz6h
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-600454 describe pod registry-test test-local-path ingress-nginx-admission-create-ztv9d ingress-nginx-admission-patch-zrdwq registry-creds-567fb78d95-jkz6h
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-600454 describe pod registry-test test-local-path ingress-nginx-admission-create-ztv9d ingress-nginx-admission-patch-zrdwq registry-creds-567fb78d95-jkz6h: exit status 1 (78.456223ms)

                                                
                                                
-- stdout --
	Name:             registry-test
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-600454/192.168.49.2
	Start Time:       Sat, 10 Jan 2026 01:55:14 +0000
	Labels:           run=registry-test
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  registry-test:
	    Container ID:  cri-o://216095b0e36e93f5b1d47502a9b866713158794ff2e82373fb75eb00167cdd05
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 Jan 2026 01:55:15 +0000
	      Finished:     Sat, 10 Jan 2026 01:55:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghmcn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ghmcn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/registry-test to addons-600454
	  Normal  Pulling    2s    kubelet            spec.containers{registry-test}: Pulling image "gcr.io/k8s-minikube/busybox"
	  Normal  Pulled     1s    kubelet            spec.containers{registry-test}: Successfully pulled image "gcr.io/k8s-minikube/busybox" in 568ms (832ms including waiting). Image size: 1462480 bytes.
	  Normal  Created    1s    kubelet            spec.containers{registry-test}: Container created
	  Normal  Started    1s    kubelet            spec.containers{registry-test}: Container started
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-600454/192.168.49.2
	Start Time:       Sat, 10 Jan 2026 01:55:13 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  busybox:
	    Container ID:  cri-o://b73c8420312a4372cf4166b85b5b0715f3dcde80265a6a5124355e363fa1bd4e
	    Image:         busybox:stable
	    Image ID:      docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 Jan 2026 01:55:14 +0000
	      Finished:     Sat, 10 Jan 2026 01:55:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8c94q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8c94q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/test-local-path to addons-600454
	  Normal  Pulling    3s    kubelet            spec.containers{busybox}: Pulling image "busybox:stable"
	  Normal  Pulled     2s    kubelet            spec.containers{busybox}: Successfully pulled image "busybox:stable" in 892ms (892ms including waiting). Image size: 4670414 bytes.
	  Normal  Created    2s    kubelet            spec.containers{busybox}: Container created
	  Normal  Started    2s    kubelet            spec.containers{busybox}: Container started

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ztv9d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zrdwq" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-jkz6h" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-600454 describe pod registry-test test-local-path ingress-nginx-admission-create-ztv9d ingress-nginx-admission-patch-zrdwq registry-creds-567fb78d95-jkz6h: exit status 1
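For reference, the registry-test pod described above is a busybox container that probes the registry addon's Service over cluster DNS (wget --spider -S http://registry.kube-system.svc.cluster.local) and exits 0 on success, which it did here. A roughly equivalent in-cluster check in Go, using the same URL shown in the pod args (only meaningful when run from inside the cluster network):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Same target the registry-test pod probes with wget --spider.
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry not reachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}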
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable headlamp --alsologtostderr -v=1: exit status 11 (244.710375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:17.019071   25424 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:17.019388   25424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:17.019397   25424 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:17.019401   25424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:17.019574   25424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:17.019835   25424 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:17.020159   25424 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:17.020175   25424 addons.go:622] checking whether the cluster is paused
	I0110 01:55:17.020257   25424 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:17.020271   25424 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:17.020634   25424 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:17.039355   25424 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:17.039417   25424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:17.056773   25424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:17.153141   25424 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:17.153207   25424 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:17.183288   25424 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:17.183313   25424 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:17.183317   25424 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:17.183320   25424 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:17.183323   25424 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:17.183327   25424 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:17.183330   25424 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:17.183333   25424 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:17.183335   25424 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:17.183344   25424 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:17.183350   25424 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:17.183353   25424 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:17.183355   25424 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:17.183358   25424 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:17.183361   25424 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:17.183369   25424 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:17.183372   25424 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:17.183376   25424 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:17.183383   25424 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:17.183386   25424 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:17.183389   25424 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:17.183392   25424 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:17.183395   25424 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:17.183397   25424 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:17.183400   25424 cri.go:96] found id: ""
	I0110 01:55:17.183449   25424 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:17.197583   25424 out.go:203] 
	W0110 01:55:17.198690   25424 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:17.198711   25424 out.go:285] * 
	* 
	W0110 01:55:17.199449   25424 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:17.200453   25424 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.60s)
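Every addons-disable failure in this report ends the same way as the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then asking runc for their state, and on this cri-o runner `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, so the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal sketch of that check sequence, reconstructed from the log lines above; the two commands are the ones shown in the report, but the Go code itself is illustrative and is not minikube's implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1 (succeeds in the logs): list kube-system containers via crictl.
		ids, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
			"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("kube-system container IDs:\n%s", ids)

		// Step 2 (fails in the logs): query runc for paused containers. runc keeps
		// its state under /run/runc, which is missing on this runner, presumably
		// because cri-o is using a different OCI runtime or state root.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc state: %s\n", out)
	}

The CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd failures below are the same paused-check aborting at the same point; only the addon name and the attached log file differ.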

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-7t6j7" [091ddc70-3842-439f-88ce-2f45724dc837] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002056092s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.649335ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:09.102563   23849 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:09.102685   23849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:09.102695   23849 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:09.102702   23849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:09.102994   23849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:09.103297   23849 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:09.103604   23849 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:09.103618   23849 addons.go:622] checking whether the cluster is paused
	I0110 01:55:09.103733   23849 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:09.103751   23849 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:09.104281   23849 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:09.124038   23849 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:09.124101   23849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:09.144874   23849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:09.239635   23849 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:09.239720   23849 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:09.273088   23849 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:09.273120   23849 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:09.273126   23849 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:09.273131   23849 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:09.273136   23849 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:09.273142   23849 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:09.273147   23849 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:09.273151   23849 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:09.273155   23849 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:09.273170   23849 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:09.273179   23849 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:09.273183   23849 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:09.273187   23849 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:09.273192   23849 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:09.273196   23849 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:09.273206   23849 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:09.273212   23849 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:09.273218   23849 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:09.273222   23849 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:09.273226   23849 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:09.273234   23849 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:09.273245   23849 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:09.273249   23849 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:09.273253   23849 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:09.273257   23849 cri.go:96] found id: ""
	I0110 01:55:09.273306   23849 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:09.287755   23849 out.go:203] 
	W0110 01:55:09.289051   23849 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:09.289070   23849 out.go:285] * 
	* 
	W0110 01:55:09.289946   23849 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:09.290969   23849 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.13s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-600454 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-600454 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-600454 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [017c55fd-f407-48bd-a9ad-cc6a8624588f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [017c55fd-f407-48bd-a9ad-cc6a8624588f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [017c55fd-f407-48bd-a9ad-cc6a8624588f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003208525s
addons_test.go:969: (dbg) Run:  kubectl --context addons-600454 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 ssh "cat /opt/local-path-provisioner/pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-600454 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-600454 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (261.220078ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:17.229068   25492 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:17.229218   25492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:17.229230   25492 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:17.229236   25492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:17.229458   25492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:17.229760   25492 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:17.230097   25492 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:17.230113   25492 addons.go:622] checking whether the cluster is paused
	I0110 01:55:17.230192   25492 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:17.230203   25492 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:17.230554   25492 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:17.249958   25492 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:17.250014   25492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:17.269666   25492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:17.362103   25492 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:17.362186   25492 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:17.398840   25492 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:17.398980   25492 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:17.398994   25492 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:17.398999   25492 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:17.399003   25492 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:17.399008   25492 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:17.399013   25492 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:17.399017   25492 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:17.399021   25492 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:17.399029   25492 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:17.399036   25492 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:17.399040   25492 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:17.399045   25492 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:17.399049   25492 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:17.399054   25492 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:17.399060   25492 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:17.399065   25492 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:17.399076   25492 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:17.399080   25492 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:17.399084   25492 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:17.399091   25492 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:17.399095   25492 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:17.399100   25492 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:17.399105   25492 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:17.399109   25492 cri.go:96] found id: ""
	I0110 01:55:17.399154   25492 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:17.420554   25492 out.go:203] 
	W0110 01:55:17.422656   25492 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:17.422673   25492 out.go:285] * 
	* 
	W0110 01:55:17.423754   25492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:17.426746   25492 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.13s)
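Up to the disable step, the LocalPath test itself passes: test-pvc is bound, the test-local-path pod writes 'local-path-provisioner' into /test/file1 on that claim, and the file is read back from the node via `minikube ssh`. A small Go sketch of that read-back verification using the exact path from the report (the PVC UID in the path is specific to this run and shown purely for illustration):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Path taken from the `minikube ssh "cat ..."` step above; the PVC UID
		// changes on every run, so this value is illustrative only.
		const file = "/opt/local-path-provisioner/pvc-863f22cb-5e9b-4797-a7c4-8a52d7a80e69_default_test-pvc/file1"

		out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-600454",
			"ssh", "cat "+file).Output()
		if err != nil {
			fmt.Println("read-back failed:", err)
			return
		}
		if strings.TrimSpace(string(out)) != "local-path-provisioner" {
			fmt.Println("unexpected content:", string(out))
			return
		}
		fmt.Println("local-path write/read verified")
	}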

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-842xc" [56576535-28b2-4154-90d5-fae5922238e3] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00374692s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (260.692371ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:15.366117   24628 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:15.366263   24628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:15.366271   24628 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:15.366277   24628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:15.366597   24628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:15.366937   24628 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:15.367425   24628 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:15.367470   24628 addons.go:622] checking whether the cluster is paused
	I0110 01:55:15.367665   24628 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:15.367709   24628 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:15.368302   24628 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:15.393225   24628 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:15.393761   24628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:15.415308   24628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:15.508131   24628 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:15.508216   24628 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:15.539005   24628 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:15.539031   24628 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:15.539035   24628 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:15.539039   24628 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:15.539042   24628 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:15.539046   24628 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:15.539049   24628 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:15.539052   24628 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:15.539055   24628 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:15.539063   24628 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:15.539066   24628 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:15.539068   24628 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:15.539071   24628 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:15.539074   24628 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:15.539077   24628 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:15.539094   24628 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:15.539099   24628 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:15.539103   24628 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:15.539106   24628 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:15.539109   24628 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:15.539114   24628 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:15.539120   24628 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:15.539123   24628 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:15.539125   24628 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:15.539130   24628 cri.go:96] found id: ""
	I0110 01:55:15.539191   24628 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:15.553177   24628 out.go:203] 
	W0110 01:55:15.554592   24628 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:15.554613   24628 out.go:285] * 
	* 
	W0110 01:55:15.555607   24628 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:15.556834   24628 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-h4qs7" [2bd9f00c-1c61-4d84-aa78-a2b378290247] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003535644s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable yakd --alsologtostderr -v=1: exit status 11 (253.339152ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:09.102230   23848 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:09.102381   23848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:09.102394   23848 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:09.102400   23848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:09.102662   23848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:09.103010   23848 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:09.103465   23848 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:09.103486   23848 addons.go:622] checking whether the cluster is paused
	I0110 01:55:09.103622   23848 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:09.103633   23848 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:09.104184   23848 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:09.123698   23848 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:09.123749   23848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:09.143026   23848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:09.236534   23848 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:09.236622   23848 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:09.271865   23848 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:09.271912   23848 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:09.271919   23848 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:09.271924   23848 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:09.271928   23848 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:09.271933   23848 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:09.271937   23848 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:09.271941   23848 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:09.271945   23848 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:09.271955   23848 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:09.271959   23848 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:09.271963   23848 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:09.271967   23848 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:09.271971   23848 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:09.271975   23848 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:09.271991   23848 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:09.271997   23848 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:09.272004   23848 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:09.272009   23848 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:09.272013   23848 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:09.272020   23848 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:09.272024   23848 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:09.272029   23848 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:09.272033   23848 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:09.272037   23848 cri.go:96] found id: ""
	I0110 01:55:09.272084   23848 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:09.286768   23848 out.go:203] 
	W0110 01:55:09.288420   23848 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:09.288449   23848 out.go:285] * 
	* 
	W0110 01:55:09.289256   23848 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:09.290290   23848 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)
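
Editor's note: every addon-disable failure in this run exits the same way: the paused-state check shells out to "sudo runc list -f json", and that command fails because /run/runc does not exist on the crio node. The Go sketch below is illustrative only; it is not minikube's code, and the listRuncContainers helper and its missing-directory handling are assumptions. It just shows the shape of the check visible in the log: run the command, parse its JSON, and (hypothetically) treat the absent state directory as an empty list rather than a fatal error.

	// Hedged sketch, not minikube's implementation. Runs "sudo runc list -f json"
	// on the node, as the log above does, and parses the JSON array it emits.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// runcContainer mirrors the fields of `runc list -f json` used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listRuncContainers is a hypothetical helper name, not a minikube symbol.
	func listRuncContainers() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// The failure seen throughout this report: /run/runc is absent.
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil // assumption: interpret as "no runc-managed containers"
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		cs, err := listRuncContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d runc containers\n", len(cs))
	}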

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-r27zc" [61f0da90-77f5-49e4-ab9f-eda90d4e04ea] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003017282s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-600454 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600454 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (260.280445ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:55:09.100327   23847 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:55:09.100703   23847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:09.100715   23847 out.go:374] Setting ErrFile to fd 2...
	I0110 01:55:09.100719   23847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:55:09.100988   23847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:55:09.101806   23847 mustload.go:66] Loading cluster: addons-600454
	I0110 01:55:09.103074   23847 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:09.103092   23847 addons.go:622] checking whether the cluster is paused
	I0110 01:55:09.103251   23847 config.go:182] Loaded profile config "addons-600454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:55:09.103273   23847 host.go:66] Checking if "addons-600454" exists ...
	I0110 01:55:09.103862   23847 cli_runner.go:164] Run: docker container inspect addons-600454 --format={{.State.Status}}
	I0110 01:55:09.123772   23847 ssh_runner.go:195] Run: systemctl --version
	I0110 01:55:09.123845   23847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-600454
	I0110 01:55:09.144172   23847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/addons-600454/id_rsa Username:docker}
	I0110 01:55:09.239330   23847 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:55:09.239410   23847 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:55:09.274577   23847 cri.go:96] found id: "d18c5e99eeeec9e5bbaa8d31a32427af9c7bc7e800949de68f11e5b6854be820"
	I0110 01:55:09.274595   23847 cri.go:96] found id: "c1f126abe0c184c86447d05acc1af39fad1f2dea4cd187e6f7a0b4318a5ae980"
	I0110 01:55:09.274601   23847 cri.go:96] found id: "d167ffd155e8eecc968853ebd2264a9eaef38dd5e52f666974e792f7f784c9db"
	I0110 01:55:09.274605   23847 cri.go:96] found id: "62f4a9ae053ab92c851e38794ddc671bff52e9fc43b6e6a2f80d9b8837035301"
	I0110 01:55:09.274610   23847 cri.go:96] found id: "889a7309650ee995aa717dadb8d5325ab9207e4f7a03464cf949c4c08096688b"
	I0110 01:55:09.274615   23847 cri.go:96] found id: "d90dac6fc570a007b94233b926f5af162f9dbe8642555e07676691d947b48f28"
	I0110 01:55:09.274620   23847 cri.go:96] found id: "8e1356653be04d4afd2166979bf15fe2297d858b610683b25c2127a690c47d88"
	I0110 01:55:09.274624   23847 cri.go:96] found id: "2e415da3430a899a9946907c2f45832bb4fabeadf6359a74df9eec88963d6ead"
	I0110 01:55:09.274629   23847 cri.go:96] found id: "d7e22f89ede22f0be23e758dfe2bb6de554d3e3cd9cb16def43e2ed2cebc2c4e"
	I0110 01:55:09.274645   23847 cri.go:96] found id: "3856a4736779010c3ad95ec7ce662c58ad795d51a548d117bd7817f355b2e9b9"
	I0110 01:55:09.274650   23847 cri.go:96] found id: "e8b5123fb61c9cf82bc69f6055620fb5fcf202a601b30da56afeebf6047bb342"
	I0110 01:55:09.274654   23847 cri.go:96] found id: "672dc05e847e50f9a5b68f2a11312907cc98e2bc8b61c23b34d307f52a00adcf"
	I0110 01:55:09.274659   23847 cri.go:96] found id: "d621433be1b1a2421d26df636dedc85c95faaf90b9bd0c3816bdd9a5bee89d23"
	I0110 01:55:09.274664   23847 cri.go:96] found id: "0310535c02c60eb448245f2e1aeb0c7bb1235f6d3c07dbc5671f15e9ccc0d338"
	I0110 01:55:09.274668   23847 cri.go:96] found id: "9b436cb1f9906a5bcdecbd23347b572cd6a0b351030bbe21ea29477969daa285"
	I0110 01:55:09.274676   23847 cri.go:96] found id: "3f285e15d5a2944dc7621d62fcf0a1a953a8bcffe65b9608979eb3096ddba956"
	I0110 01:55:09.274681   23847 cri.go:96] found id: "f721f38b4bc9f4f9592b7aed721c49b0caba955e2bdbbebadf8190f986f548ac"
	I0110 01:55:09.274686   23847 cri.go:96] found id: "a750179be853739fe70f7a51c3b15fd7104c72af694675ae400ac8530a1d2cff"
	I0110 01:55:09.274691   23847 cri.go:96] found id: "27808d90b2bc2ef3d2581f89b7c3d51a5b695c0b50fe7271113d408d3bd00ba9"
	I0110 01:55:09.274703   23847 cri.go:96] found id: "c462386867f5a4cfef291325472d2a96c8af7d106c29410bebd3f1a80c918563"
	I0110 01:55:09.274706   23847 cri.go:96] found id: "3c3166ab236563044d46cf8156deb99e52d585b24e7f4cae9fbd11ea32a393de"
	I0110 01:55:09.274714   23847 cri.go:96] found id: "288a49ac47b291a212b2fcb20e5ff1c853d4fb8825241bb665924c01b8908cf1"
	I0110 01:55:09.274718   23847 cri.go:96] found id: "e810a82b4230fe9bf744a0431721854b02f0941e5a69b8fea0860179aedb76fb"
	I0110 01:55:09.274726   23847 cri.go:96] found id: "6d1e5be9f6b264242bbafe30ad2c9047a07668cc0d2e72f1c705f785c5bf04d5"
	I0110 01:55:09.274731   23847 cri.go:96] found id: ""
	I0110 01:55:09.274773   23847 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:55:09.288366   23847 out.go:203] 
	W0110 01:55:09.289719   23847 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:55:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:55:09.289742   23847 out.go:285] * 
	* 
	W0110 01:55:09.290389   23847 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:55:09.291611   23847 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-600454 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                    
TestJSONOutput/pause/Command (1.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-319215 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-319215 --output=json --user=testUser: exit status 80 (1.792754386s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"35abf482-4b47-49bc-a456-fc41ae68c378","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-319215 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a83a696b-4905-4ffc-ae8a-1498d8ecac2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T02:07:13Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"24fc67d0-7f91-4d1a-9a6c-b37777200153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-319215 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.79s)
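
Editor's note: the stdout above is line-delimited CloudEvents JSON, one object per line with specversion, type, and data fields, and the failure surfaces as an io.k8s.sigs.minikube.error event (GUEST_PAUSE, exitcode 80). The Go sketch below is not the actual json_output_test harness; the cloudEvent type and program structure are assumptions. It scans such output and reports any error event, which is essentially what the test asserts against.

	// Hedged sketch: decode the line-delimited JSON events that
	// `minikube pause --output=json` prints, as seen in the stdout above.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent keeps only the fields this sketch needs; all data values
	// in the log above are strings, so map[string]string suffices.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// usage (assumed): minikube pause -p <profile> --output=json | go run main.go
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %s (exitcode %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}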

                                                
                                    
TestJSONOutput/unpause/Command (2.12s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-319215 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-319215 --output=json --user=testUser: exit status 80 (2.115273386s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ca7c667a-5e03-4a67-bb9e-db276742237a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-319215 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7cd132ca-8df7-4ba1-a599-a2a3eb77301e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T02:07:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"9a502c84-849e-4884-beda-8026d1b05c43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-319215 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.12s)

                                                
                                    
TestPause/serial/Pause (5.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-538591 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-538591 --alsologtostderr -v=5: exit status 80 (2.364764385s)

                                                
                                                
-- stdout --
	* Pausing node pause-538591 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:18:10.879917  197541 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:18:10.880169  197541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:10.880178  197541 out.go:374] Setting ErrFile to fd 2...
	I0110 02:18:10.880182  197541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:10.880429  197541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:18:10.880751  197541 out.go:368] Setting JSON to false
	I0110 02:18:10.880771  197541 mustload.go:66] Loading cluster: pause-538591
	I0110 02:18:10.881197  197541 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:10.881625  197541 cli_runner.go:164] Run: docker container inspect pause-538591 --format={{.State.Status}}
	I0110 02:18:10.899812  197541 host.go:66] Checking if "pause-538591" exists ...
	I0110 02:18:10.900103  197541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:18:10.958028  197541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:18:10.948152718 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:18:10.958602  197541 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-538591 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:18:10.960430  197541 out.go:179] * Pausing node pause-538591 ... 
	I0110 02:18:10.961425  197541 host.go:66] Checking if "pause-538591" exists ...
	I0110 02:18:10.961646  197541 ssh_runner.go:195] Run: systemctl --version
	I0110 02:18:10.961683  197541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:10.978967  197541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:11.070393  197541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:11.082317  197541 pause.go:52] kubelet running: true
	I0110 02:18:11.082410  197541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:18:11.217444  197541 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:18:11.217533  197541 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:18:11.280796  197541 cri.go:96] found id: "0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327"
	I0110 02:18:11.280816  197541 cri.go:96] found id: "79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960"
	I0110 02:18:11.280821  197541 cri.go:96] found id: "fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6"
	I0110 02:18:11.280824  197541 cri.go:96] found id: "0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8"
	I0110 02:18:11.280827  197541 cri.go:96] found id: "cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8"
	I0110 02:18:11.280830  197541 cri.go:96] found id: "2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598"
	I0110 02:18:11.280833  197541 cri.go:96] found id: "912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b"
	I0110 02:18:11.280836  197541 cri.go:96] found id: ""
	I0110 02:18:11.280871  197541 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:18:11.292205  197541 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:18:11Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:18:11.598733  197541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:11.611612  197541 pause.go:52] kubelet running: false
	I0110 02:18:11.611665  197541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:18:11.714997  197541 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:18:11.715070  197541 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:18:11.779266  197541 cri.go:96] found id: "0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327"
	I0110 02:18:11.779289  197541 cri.go:96] found id: "79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960"
	I0110 02:18:11.779293  197541 cri.go:96] found id: "fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6"
	I0110 02:18:11.779297  197541 cri.go:96] found id: "0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8"
	I0110 02:18:11.779300  197541 cri.go:96] found id: "cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8"
	I0110 02:18:11.779304  197541 cri.go:96] found id: "2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598"
	I0110 02:18:11.779318  197541 cri.go:96] found id: "912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b"
	I0110 02:18:11.779323  197541 cri.go:96] found id: ""
	I0110 02:18:11.779366  197541 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:18:12.112152  197541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:12.125716  197541 pause.go:52] kubelet running: false
	I0110 02:18:12.125791  197541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:18:12.232995  197541 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:18:12.233080  197541 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:18:12.295844  197541 cri.go:96] found id: "0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327"
	I0110 02:18:12.295867  197541 cri.go:96] found id: "79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960"
	I0110 02:18:12.295875  197541 cri.go:96] found id: "fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6"
	I0110 02:18:12.295881  197541 cri.go:96] found id: "0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8"
	I0110 02:18:12.295922  197541 cri.go:96] found id: "cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8"
	I0110 02:18:12.295931  197541 cri.go:96] found id: "2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598"
	I0110 02:18:12.295940  197541 cri.go:96] found id: "912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b"
	I0110 02:18:12.295943  197541 cri.go:96] found id: ""
	I0110 02:18:12.295986  197541 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:18:12.961953  197541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:12.974673  197541 pause.go:52] kubelet running: false
	I0110 02:18:12.974745  197541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:18:13.089362  197541 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:18:13.089445  197541 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:18:13.165967  197541 cri.go:96] found id: "0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327"
	I0110 02:18:13.165987  197541 cri.go:96] found id: "79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960"
	I0110 02:18:13.165991  197541 cri.go:96] found id: "fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6"
	I0110 02:18:13.165995  197541 cri.go:96] found id: "0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8"
	I0110 02:18:13.166000  197541 cri.go:96] found id: "cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8"
	I0110 02:18:13.166005  197541 cri.go:96] found id: "2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598"
	I0110 02:18:13.166009  197541 cri.go:96] found id: "912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b"
	I0110 02:18:13.166013  197541 cri.go:96] found id: ""
	I0110 02:18:13.166058  197541 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:18:13.182999  197541 out.go:203] 
	W0110 02:18:13.184084  197541 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:18:13.184103  197541 out.go:285] * 
	* 
	W0110 02:18:13.185721  197541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:18:13.186978  197541 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-538591 --alsologtostderr -v=5" : exit status 80
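Editor's note: the stderr above shows the pause path retrying the failing command (retry.go: "will retry after 300ms") several times before exiting with GUEST_PAUSE. The Go sketch below illustrates that retry-with-backoff pattern only; the attempt count, delays, and retry function are illustrative assumptions, not minikube's actual retry package.

	// Hedged sketch of retry-with-backoff around the command that fails above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retry runs fn up to attempts times, doubling the delay between tries.
	func retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retry(4, 300*time.Millisecond, func() error {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err != nil {
				return fmt.Errorf("runc list: %v: %s", err, out)
			}
			return nil
		})
		if err != nil {
			// Corresponds to the GUEST_PAUSE exit path in the log above.
			fmt.Println("pause check failed:", err)
		}
	}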
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-538591
helpers_test.go:244: (dbg) docker inspect pause-538591:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92",
	        "Created": "2026-01-10T02:17:28.899977522Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 187468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:17:28.930713709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/hostname",
	        "HostsPath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/hosts",
	        "LogPath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92-json.log",
	        "Name": "/pause-538591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-538591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-538591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92",
	                "LowerDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-538591",
	                "Source": "/var/lib/docker/volumes/pause-538591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-538591",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-538591",
	                "name.minikube.sigs.k8s.io": "pause-538591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d9069ac7cb5668c10a248a42d759b78ad193561794d7129ea4d4ca666561c147",
	            "SandboxKey": "/var/run/docker/netns/d9069ac7cb56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32994"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32993"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-538591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "36808e08cac5f8c938bdeaf4b758395d8a3a1672776fb718c6d682efc7e7a7b8",
	                    "EndpointID": "e04263f73ce86b6ce8087388c985a75ccdc872d7e238998b84130cd15a3a9f71",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:5d:07:0a:99:82",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-538591",
	                        "2260504e62d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
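Editor's note: the inspect dump maps the guest's 22/tcp to 127.0.0.1:32990, the same SSH endpoint the pause command dialed earlier (sshutil Port:32990). The Go sketch below is an illustrative wrapper only; the inspect format template is copied from the log above, everything else is assumed.

	// Hedged sketch: recover the host port mapped to the guest's 22/tcp via
	// `docker container inspect -f ...`, the same lookup shown in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-538591").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32990 in the dump above
	}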
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-538591 -n pause-538591
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-538591 -n pause-538591: exit status 2 (347.337945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-538591 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-930407 --cancel-scheduled                                                                                  │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │ 10 Jan 26 02:15 UTC │
	│ stop    │ -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │                     │
	│ stop    │ -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │                     │
	│ stop    │ -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │ 10 Jan 26 02:15 UTC │
	│ delete  │ -p scheduled-stop-930407                                                                                                     │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:16 UTC │
	│ start   │ -p insufficient-storage-051762 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio             │ insufficient-storage-051762 │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │                     │
	│ delete  │ -p insufficient-storage-051762                                                                                               │ insufficient-storage-051762 │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:16 UTC │
	│ start   │ -p offline-crio-092866 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio            │ offline-crio-092866         │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p test-preload-107034 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p stopped-upgrade-116273 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ stopped-upgrade-116273      │ jenkins │ v1.35.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p running-upgrade-138757 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ running-upgrade-138757      │ jenkins │ v1.35.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p running-upgrade-138757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ running-upgrade-138757      │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ stop    │ stopped-upgrade-116273 stop                                                                                                  │ stopped-upgrade-116273      │ jenkins │ v1.35.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p stopped-upgrade-116273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ stopped-upgrade-116273      │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ delete  │ -p offline-crio-092866                                                                                                       │ offline-crio-092866         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ image   │ test-preload-107034 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                  │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p pause-538591 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                    │ pause-538591                │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:18 UTC │
	│ stop    │ -p test-preload-107034                                                                                                       │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p test-preload-107034 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio           │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ delete  │ -p stopped-upgrade-116273                                                                                                    │ stopped-upgrade-116273      │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p NoKubernetes-731674 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ start   │ -p NoKubernetes-731674 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                        │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p NoKubernetes-731674 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ start   │ -p pause-538591 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                             │ pause-538591                │ jenkins │ v1.37.0 │ 10 Jan 26 02:18 UTC │ 10 Jan 26 02:18 UTC │
	│ pause   │ -p pause-538591 --alsologtostderr -v=5                                                                                       │ pause-538591                │ jenkins │ v1.37.0 │ 10 Jan 26 02:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:18:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:18:05.222000  196543 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:18:05.222610  196543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:05.222631  196543 out.go:374] Setting ErrFile to fd 2...
	I0110 02:18:05.222639  196543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:05.223217  196543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:18:05.224092  196543 out.go:368] Setting JSON to false
	I0110 02:18:05.225151  196543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3634,"bootTime":1768007851,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:18:05.225231  196543 start.go:143] virtualization: kvm guest
	I0110 02:18:05.226853  196543 out.go:179] * [pause-538591] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:18:05.228168  196543 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:18:05.228172  196543 notify.go:221] Checking for updates...
	I0110 02:18:05.230183  196543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:18:05.231260  196543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:18:05.232327  196543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:18:05.233372  196543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:18:05.234377  196543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:18:05.235775  196543 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:05.236302  196543 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:18:05.261128  196543 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:18:05.261205  196543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:18:05.319186  196543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:18:05.309345582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:18:05.319303  196543 docker.go:319] overlay module found
	I0110 02:18:05.321572  196543 out.go:179] * Using the docker driver based on existing profile
	I0110 02:18:05.322617  196543 start.go:309] selected driver: docker
	I0110 02:18:05.322631  196543 start.go:928] validating driver "docker" against &{Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:18:05.322769  196543 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:18:05.322867  196543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:18:05.378266  196543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:18:05.368568714 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:18:05.378865  196543 cni.go:84] Creating CNI manager for ""
	I0110 02:18:05.378948  196543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:18:05.379005  196543 start.go:353] cluster config:
	{Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:18:05.380708  196543 out.go:179] * Starting "pause-538591" primary control-plane node in "pause-538591" cluster
	I0110 02:18:05.381770  196543 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:18:05.382822  196543 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:18:05.383960  196543 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:18:05.383998  196543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:18:05.384011  196543 cache.go:65] Caching tarball of preloaded images
	I0110 02:18:05.384057  196543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:18:05.384105  196543 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:18:05.384120  196543 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:18:05.384283  196543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/config.json ...
	I0110 02:18:05.403988  196543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:18:05.404008  196543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:18:05.404023  196543 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:18:05.404047  196543 start.go:360] acquireMachinesLock for pause-538591: {Name:mk88a054b31d3424f521abede3f20061c56e66a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:18:05.404097  196543 start.go:364] duration metric: took 34.796µs to acquireMachinesLock for "pause-538591"
	I0110 02:18:05.404110  196543 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:18:05.404114  196543 fix.go:54] fixHost starting: 
	I0110 02:18:05.404312  196543 cli_runner.go:164] Run: docker container inspect pause-538591 --format={{.State.Status}}
	I0110 02:18:05.421140  196543 fix.go:112] recreateIfNeeded on pause-538591: state=Running err=<nil>
	W0110 02:18:05.421163  196543 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:18:01.544512  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	W0110 02:18:04.044444  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	I0110 02:18:05.423020  196543 out.go:252] * Updating the running docker "pause-538591" container ...
	I0110 02:18:05.423046  196543 machine.go:94] provisionDockerMachine start ...
	I0110 02:18:05.423103  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:05.440947  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:05.441211  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:05.441229  196543 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:18:05.565635  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-538591
	
	I0110 02:18:05.565693  196543 ubuntu.go:182] provisioning hostname "pause-538591"
	I0110 02:18:05.565787  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:05.584268  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:05.584491  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:05.584502  196543 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-538591 && echo "pause-538591" | sudo tee /etc/hostname
	I0110 02:18:05.719743  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-538591
	
	I0110 02:18:05.719803  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:05.737403  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:05.737635  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:05.737651  196543 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-538591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-538591/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-538591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:18:05.863066  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:18:05.863094  196543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:18:05.863112  196543 ubuntu.go:190] setting up certificates
	I0110 02:18:05.863133  196543 provision.go:84] configureAuth start
	I0110 02:18:05.863191  196543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-538591
	I0110 02:18:05.880605  196543 provision.go:143] copyHostCerts
	I0110 02:18:05.880673  196543 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:18:05.880689  196543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:18:05.880764  196543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:18:05.880878  196543 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:18:05.880903  196543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:18:05.880939  196543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:18:05.881074  196543 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:18:05.881087  196543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:18:05.881118  196543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:18:05.881210  196543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.pause-538591 san=[127.0.0.1 192.168.85.2 localhost minikube pause-538591]
	I0110 02:18:06.064050  196543 provision.go:177] copyRemoteCerts
	I0110 02:18:06.064100  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:18:06.064184  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.082356  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.174604  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:18:06.192064  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 02:18:06.208762  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:18:06.225446  196543 provision.go:87] duration metric: took 362.292289ms to configureAuth
	I0110 02:18:06.225476  196543 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:18:06.225667  196543 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:06.225754  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.244326  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:06.244574  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:06.244599  196543 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:18:06.557469  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:18:06.557493  196543 machine.go:97] duration metric: took 1.134440381s to provisionDockerMachine
	I0110 02:18:06.557506  196543 start.go:293] postStartSetup for "pause-538591" (driver="docker")
	I0110 02:18:06.557516  196543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:18:06.557571  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:18:06.557615  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.575846  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.667694  196543 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:18:06.671052  196543 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:18:06.671074  196543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:18:06.671083  196543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:18:06.671139  196543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:18:06.671226  196543 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:18:06.671333  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:18:06.678710  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:18:06.695409  196543 start.go:296] duration metric: took 137.890585ms for postStartSetup
	I0110 02:18:06.695486  196543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:18:06.695543  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.712847  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.803087  196543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:18:06.807951  196543 fix.go:56] duration metric: took 1.40383225s for fixHost
	I0110 02:18:06.807972  196543 start.go:83] releasing machines lock for "pause-538591", held for 1.403867901s
	I0110 02:18:06.808052  196543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-538591
	I0110 02:18:06.826163  196543 ssh_runner.go:195] Run: cat /version.json
	I0110 02:18:06.826238  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.826254  196543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:18:06.826307  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.845264  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.845621  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.986999  196543 ssh_runner.go:195] Run: systemctl --version
	I0110 02:18:06.993837  196543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:18:07.028116  196543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:18:07.032781  196543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:18:07.032846  196543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:18:07.040765  196543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:18:07.040782  196543 start.go:496] detecting cgroup driver to use...
	I0110 02:18:07.040812  196543 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:18:07.040849  196543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:18:07.055449  196543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:18:07.067188  196543 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:18:07.067232  196543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:18:07.081273  196543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:18:07.092608  196543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:18:07.197303  196543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:18:07.305415  196543 docker.go:234] disabling docker service ...
	I0110 02:18:07.305478  196543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:18:07.319243  196543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:18:07.331263  196543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:18:07.436165  196543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:18:07.540521  196543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:18:07.553278  196543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:18:07.567022  196543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:18:07.567076  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.575560  196543 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:18:07.575608  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.584125  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.592332  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.600544  196543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:18:07.608034  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.616716  196543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.624318  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.632783  196543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:18:07.639635  196543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:18:07.646465  196543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:07.749425  196543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:18:07.931910  196543 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:18:07.931979  196543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:18:07.936174  196543 start.go:574] Will wait 60s for crictl version
	I0110 02:18:07.936252  196543 ssh_runner.go:195] Run: which crictl
	I0110 02:18:07.940129  196543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:18:07.965282  196543 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:18:07.965347  196543 ssh_runner.go:195] Run: crio --version
	I0110 02:18:07.991240  196543 ssh_runner.go:195] Run: crio --version
	I0110 02:18:08.019478  196543 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:18:08.020514  196543 cli_runner.go:164] Run: docker network inspect pause-538591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:18:08.038443  196543 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:18:08.043184  196543 kubeadm.go:884] updating cluster {Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:18:08.043326  196543 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:18:08.043401  196543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:18:08.074898  196543 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:18:08.074919  196543 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:18:08.074967  196543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:18:08.100169  196543 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:18:08.100187  196543 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:18:08.100193  196543 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:18:08.100285  196543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-538591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:18:08.100339  196543 ssh_runner.go:195] Run: crio config
	I0110 02:18:08.145938  196543 cni.go:84] Creating CNI manager for ""
	I0110 02:18:08.145969  196543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:18:08.145985  196543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:18:08.146007  196543 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-538591 NodeName:pause-538591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:18:08.146126  196543 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-538591"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:18:08.146186  196543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:18:08.154177  196543 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:18:08.154244  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:18:08.161426  196543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 02:18:08.174037  196543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:18:08.185627  196543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I0110 02:18:08.197990  196543 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:18:08.201462  196543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:08.308877  196543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:18:08.321490  196543 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591 for IP: 192.168.85.2
	I0110 02:18:08.321510  196543 certs.go:195] generating shared ca certs ...
	I0110 02:18:08.321528  196543 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:18:08.321685  196543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:18:08.321747  196543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:18:08.321770  196543 certs.go:257] generating profile certs ...
	I0110 02:18:08.321866  196543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.key
	I0110 02:18:08.321981  196543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/apiserver.key.b28a66ad
	I0110 02:18:08.322058  196543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/proxy-client.key
	I0110 02:18:08.322191  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:18:08.322231  196543 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:18:08.322246  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:18:08.322287  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:18:08.322322  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:18:08.322403  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:18:08.322467  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:18:08.323155  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:18:08.341426  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:18:08.358769  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:18:08.375675  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:18:08.393157  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 02:18:08.409757  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:18:08.427105  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:18:08.443775  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:18:08.460339  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:18:08.476329  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:18:08.492587  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:18:08.508823  196543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:18:08.520512  196543 ssh_runner.go:195] Run: openssl version
	I0110 02:18:08.526352  196543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.533138  196543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:18:08.540102  196543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.543978  196543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.544023  196543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.578340  196543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:18:08.585473  196543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.592584  196543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:18:08.599581  196543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.602956  196543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.602994  196543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.637701  196543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:18:08.644821  196543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.652147  196543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:18:08.658902  196543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.662245  196543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.662285  196543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.695791  196543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:18:08.702923  196543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:18:08.706319  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:18:08.741298  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:18:08.775073  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:18:08.809268  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:18:08.845529  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:18:08.880764  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:18:08.915981  196543 kubeadm.go:401] StartCluster: {Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:18:08.916119  196543 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:18:08.916196  196543 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:18:08.944588  196543 cri.go:96] found id: "0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327"
	I0110 02:18:08.944609  196543 cri.go:96] found id: "79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960"
	I0110 02:18:08.944612  196543 cri.go:96] found id: "fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6"
	I0110 02:18:08.944616  196543 cri.go:96] found id: "0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8"
	I0110 02:18:08.944619  196543 cri.go:96] found id: "cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8"
	I0110 02:18:08.944622  196543 cri.go:96] found id: "2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598"
	I0110 02:18:08.944624  196543 cri.go:96] found id: "912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b"
	I0110 02:18:08.944627  196543 cri.go:96] found id: ""
	I0110 02:18:08.944675  196543 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:18:08.956130  196543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:18:08Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:18:08.956199  196543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:18:08.963854  196543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:18:08.963875  196543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:18:08.963932  196543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:18:08.970991  196543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:18:08.971698  196543 kubeconfig.go:125] found "pause-538591" server: "https://192.168.85.2:8443"
	I0110 02:18:08.972763  196543 kapi.go:59] client config for pause-538591: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.key", CAFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:18:08.973167  196543 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 02:18:08.973182  196543 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 02:18:08.973188  196543 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 02:18:08.973194  196543 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 02:18:08.973200  196543 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 02:18:08.973205  196543 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 02:18:08.973541  196543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:18:08.980912  196543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 02:18:08.980940  196543 kubeadm.go:602] duration metric: took 17.05937ms to restartPrimaryControlPlane
	I0110 02:18:08.980947  196543 kubeadm.go:403] duration metric: took 64.976546ms to StartCluster
	I0110 02:18:08.980959  196543 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:18:08.981012  196543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:18:08.982065  196543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:18:08.982274  196543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:18:08.982339  196543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:18:08.982566  196543 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:08.984874  196543 out.go:179] * Verifying Kubernetes components...
	I0110 02:18:08.984875  196543 out.go:179] * Enabled addons: 
	I0110 02:18:08.277763  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 02:18:08.277803  182176 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:18:08.985958  196543 addons.go:530] duration metric: took 3.623194ms for enable addons: enabled=[]
	I0110 02:18:08.985984  196543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:09.091045  196543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:18:09.103738  196543 node_ready.go:35] waiting up to 6m0s for node "pause-538591" to be "Ready" ...
	I0110 02:18:09.110995  196543 node_ready.go:49] node "pause-538591" is "Ready"
	I0110 02:18:09.111015  196543 node_ready.go:38] duration metric: took 7.246677ms for node "pause-538591" to be "Ready" ...
	I0110 02:18:09.111027  196543 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:18:09.111074  196543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:18:09.121923  196543 api_server.go:72] duration metric: took 139.625732ms to wait for apiserver process to appear ...
	I0110 02:18:09.121939  196543 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:18:09.121954  196543 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:18:09.126791  196543 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:18:09.127650  196543 api_server.go:141] control plane version: v1.35.0
	I0110 02:18:09.127677  196543 api_server.go:131] duration metric: took 5.731141ms to wait for apiserver health ...
	I0110 02:18:09.127686  196543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:18:09.130815  196543 system_pods.go:59] 7 kube-system pods found
	I0110 02:18:09.130844  196543 system_pods.go:61] "coredns-7d764666f9-r5f6q" [a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5] Running
	I0110 02:18:09.130851  196543 system_pods.go:61] "etcd-pause-538591" [ac5b5c4a-75f1-47a6-a54d-dcec6470d3ff] Running
	I0110 02:18:09.130857  196543 system_pods.go:61] "kindnet-d5gn6" [5a216eb7-0550-4d24-ba73-869400727115] Running
	I0110 02:18:09.130866  196543 system_pods.go:61] "kube-apiserver-pause-538591" [f93ce20d-576b-4015-8089-93c3fec476f4] Running
	I0110 02:18:09.130876  196543 system_pods.go:61] "kube-controller-manager-pause-538591" [0c7dd188-7c10-4049-86c4-2e171f514da3] Running
	I0110 02:18:09.130882  196543 system_pods.go:61] "kube-proxy-r9czs" [4e23fd55-d594-409b-bed8-74f9c7a7d159] Running
	I0110 02:18:09.130898  196543 system_pods.go:61] "kube-scheduler-pause-538591" [719a271a-e4a3-4437-8c23-5a69e3dec0c1] Running
	I0110 02:18:09.130903  196543 system_pods.go:74] duration metric: took 3.211573ms to wait for pod list to return data ...
	I0110 02:18:09.130911  196543 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:18:09.132597  196543 default_sa.go:45] found service account: "default"
	I0110 02:18:09.132616  196543 default_sa.go:55] duration metric: took 1.699216ms for default service account to be created ...
	I0110 02:18:09.132625  196543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:18:09.134920  196543 system_pods.go:86] 7 kube-system pods found
	I0110 02:18:09.134943  196543 system_pods.go:89] "coredns-7d764666f9-r5f6q" [a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5] Running
	I0110 02:18:09.134950  196543 system_pods.go:89] "etcd-pause-538591" [ac5b5c4a-75f1-47a6-a54d-dcec6470d3ff] Running
	I0110 02:18:09.134955  196543 system_pods.go:89] "kindnet-d5gn6" [5a216eb7-0550-4d24-ba73-869400727115] Running
	I0110 02:18:09.134959  196543 system_pods.go:89] "kube-apiserver-pause-538591" [f93ce20d-576b-4015-8089-93c3fec476f4] Running
	I0110 02:18:09.134963  196543 system_pods.go:89] "kube-controller-manager-pause-538591" [0c7dd188-7c10-4049-86c4-2e171f514da3] Running
	I0110 02:18:09.134966  196543 system_pods.go:89] "kube-proxy-r9czs" [4e23fd55-d594-409b-bed8-74f9c7a7d159] Running
	I0110 02:18:09.134970  196543 system_pods.go:89] "kube-scheduler-pause-538591" [719a271a-e4a3-4437-8c23-5a69e3dec0c1] Running
	I0110 02:18:09.134975  196543 system_pods.go:126] duration metric: took 2.345852ms to wait for k8s-apps to be running ...
	I0110 02:18:09.134983  196543 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:18:09.135015  196543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:09.147867  196543 system_svc.go:56] duration metric: took 12.876267ms WaitForService to wait for kubelet
	I0110 02:18:09.147901  196543 kubeadm.go:587] duration metric: took 165.603554ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:18:09.147922  196543 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:18:09.150184  196543 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:18:09.150204  196543 node_conditions.go:123] node cpu capacity is 8
	I0110 02:18:09.150216  196543 node_conditions.go:105] duration metric: took 2.288867ms to run NodePressure ...
	I0110 02:18:09.150224  196543 start.go:242] waiting for startup goroutines ...
	I0110 02:18:09.150231  196543 start.go:247] waiting for cluster config update ...
	I0110 02:18:09.150237  196543 start.go:256] writing updated cluster config ...
	I0110 02:18:09.150488  196543 ssh_runner.go:195] Run: rm -f paused
	I0110 02:18:09.153984  196543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:18:09.155012  196543 kapi.go:59] client config for pause-538591: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.key", CAFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:18:09.157211  196543 pod_ready.go:83] waiting for pod "coredns-7d764666f9-r5f6q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.160808  196543 pod_ready.go:94] pod "coredns-7d764666f9-r5f6q" is "Ready"
	I0110 02:18:09.160824  196543 pod_ready.go:86] duration metric: took 3.595771ms for pod "coredns-7d764666f9-r5f6q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.162388  196543 pod_ready.go:83] waiting for pod "etcd-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.165605  196543 pod_ready.go:94] pod "etcd-pause-538591" is "Ready"
	I0110 02:18:09.165620  196543 pod_ready.go:86] duration metric: took 3.215946ms for pod "etcd-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.167258  196543 pod_ready.go:83] waiting for pod "kube-apiserver-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.170361  196543 pod_ready.go:94] pod "kube-apiserver-pause-538591" is "Ready"
	I0110 02:18:09.170375  196543 pod_ready.go:86] duration metric: took 3.102577ms for pod "kube-apiserver-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.172038  196543 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.558432  196543 pod_ready.go:94] pod "kube-controller-manager-pause-538591" is "Ready"
	I0110 02:18:09.558461  196543 pod_ready.go:86] duration metric: took 386.406548ms for pod "kube-controller-manager-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.758260  196543 pod_ready.go:83] waiting for pod "kube-proxy-r9czs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.158148  196543 pod_ready.go:94] pod "kube-proxy-r9czs" is "Ready"
	I0110 02:18:10.158172  196543 pod_ready.go:86] duration metric: took 399.884687ms for pod "kube-proxy-r9czs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.357743  196543 pod_ready.go:83] waiting for pod "kube-scheduler-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.757950  196543 pod_ready.go:94] pod "kube-scheduler-pause-538591" is "Ready"
	I0110 02:18:10.757972  196543 pod_ready.go:86] duration metric: took 400.2062ms for pod "kube-scheduler-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.757983  196543 pod_ready.go:40] duration metric: took 1.603972285s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:18:10.800739  196543 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:18:10.802252  196543 out.go:179] * Done! kubectl is now configured to use "pause-538591" cluster and "default" namespace by default
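
The readiness sequence logged above for this run (node "Ready", apiserver /healthz returning 200 "ok", then the per-pod "Ready" waits) can be replayed by hand against the same endpoint. A minimal Go sketch, assuming it runs on the CI host where the pause-538591 profile certificates shown in the client config dump above still exist; it simply re-issues the /healthz probe that api_server.go reports as returning 200:

    // healthzprobe.go - minimal sketch, not minikube's own code.
    // Re-issues the apiserver /healthz probe using the client cert paths
    // from the pause-538591 profile shown in the client config above.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	const base = "/home/jenkins/minikube-integration/22414-10552/.minikube"
    	cert, err := tls.LoadX509KeyPair(
    		base+"/profiles/pause-538591/client.crt",
    		base+"/profiles/pause-538591/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile(base + "/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	// Same endpoint the log shows returning 200 with body "ok".
    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, string(body))
    }
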
	W0110 02:18:06.543542  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	W0110 02:18:08.543811  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	W0110 02:18:10.544092  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	I0110 02:18:12.960396  195868 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 7e8addb421f6b8dceecc99ab6c564d04e1229f89e4e14a433f888de09a131471 23e60fb805bf913c8de5c75674050edb56c2c2c1a0cb07f6ca7919ce9da4d635 1f02b4d027bb56e5757f5edece988137e189cbc8698338347517bcc363273b75 3a648d3ef5f4835c1c0c082f0389fe5b710dfcc48dc9a4b5e70e1aa6afa8fbbd: (17.867491096s)
	I0110 02:18:12.960461  195868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:12.975134  195868 out.go:179]   - Kubernetes: Stopped
	I0110 02:18:12.977471  195868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:18:13.016291  195868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:18:13.020869  195868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:18:13.020945  195868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:18:13.028952  195868 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:18:13.028970  195868 start.go:496] detecting cgroup driver to use...
	I0110 02:18:13.028995  195868 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:18:13.029038  195868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:18:13.043722  195868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:18:13.055691  195868 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:18:13.055739  195868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:18:13.069247  195868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
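
The `sudo runc list -f json` warning near the top of this log ("open /run/runc: no such file or directory") is consistent with the CRI-O configuration dumped in the section below: the node's default runtime is crun with runtime_root /run/crun, so the runc state directory is never created and the paused-container probe that shells out to runc fails, while the run itself continues to a successful restart. A minimal sketch (not minikube's actual logic, and assuming it runs as root on the node) of checking each runtime_root before invoking the matching list command:

    // runtimelist.go - minimal sketch under the assumptions above.
    // Probes each runtime_root from the CRI-O config before running
    // "<runtime> list", so a crun-only node does not trip over /run/runc.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// runtime binary -> runtime_root, as listed in the CRI-O config below.
    	roots := map[string]string{
    		"runc": "/run/runc",
    		"crun": "/run/crun",
    	}
    	for bin, root := range roots {
    		if _, err := os.Stat(root); err != nil {
    			// Mirrors the missing-directory failure seen in the log above.
    			fmt.Printf("skip %s: %v\n", bin, err)
    			continue
    		}
    		out, err := exec.Command(bin, "list").CombinedOutput()
    		fmt.Printf("%s list (err=%v):\n%s", bin, err, out)
    	}
    }
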
	
	
	==> CRI-O <==
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.847703377Z" level=info msg="RDT not available in the host system"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.847717202Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.848495377Z" level=info msg="Conmon does support the --sync option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.848511873Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.848523698Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.849235286Z" level=info msg="Conmon does support the --sync option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.849248645Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.854203839Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.854222879Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.854703599Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"en
forcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [cri
o.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.855098787Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.855144049Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926180505Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-r5f6q Namespace:kube-system ID:70a74cb4ac8718c0f13cc4f89b977996bf21aa5cbd3dd79f5d0fee6b35e0771f UID:a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5 NetNS:/var/run/netns/b3a67363-7079-4356-a0f2-b34573c88545 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007eb38}] Aliases:map[]}"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926373093Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-r5f6q for CNI network kindnet (type=ptp)"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926800898Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926828554Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926933075Z" level=info msg="Create NRI interface"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927520962Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927561863Z" level=info msg="runtime interface created"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.92758639Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.92759486Z" level=info msg="runtime interface starting up..."
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927603074Z" level=info msg="starting plugins..."
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927626155Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.928305677Z" level=info msg="No systemd watchdog enabled"
	Jan 10 02:18:07 pause-538591 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0cd6b46141d35       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     11 seconds ago      Running             coredns                   0                   70a74cb4ac871       coredns-7d764666f9-r5f6q               kube-system
	79522e27c31af       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   22 seconds ago      Running             kindnet-cni               0                   49bf448dcc1c2       kindnet-d5gn6                          kube-system
	fb80d4f2e5233       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     24 seconds ago      Running             kube-proxy                0                   5e412cef28acb       kube-proxy-r9czs                       kube-system
	0c62c16a7490e       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     34 seconds ago      Running             kube-apiserver            0                   7a9626a5b91e6       kube-apiserver-pause-538591            kube-system
	cca61452faa79       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     34 seconds ago      Running             etcd                      0                   bffbb4929d539       etcd-pause-538591                      kube-system
	2fe6e76f66283       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     34 seconds ago      Running             kube-controller-manager   0                   214f664cb819f       kube-controller-manager-pause-538591   kube-system
	912bde97aebf1       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     34 seconds ago      Running             kube-scheduler            0                   02f390ee2e2d0       kube-scheduler-pause-538591            kube-system
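
The seven container IDs returned by the CRI scan at the top of this log (0cd6b46…, 79522e2…, fb80d4f…, 0c62c16…, cca6145…, 2fe6e76…, 912bde9…) correspond one-to-one to the kube-system containers in this table: coredns, kindnet-cni, kube-proxy, kube-apiserver, etcd, kube-controller-manager and kube-scheduler, all reported Running after the restart.
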
	
	
	==> coredns [0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52056 - 8255 "HINFO IN 6304715311341725238.8675739646746251486. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064956627s
	
	
	==> describe nodes <==
	Name:               pause-538591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-538591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=pause-538591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_17_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:17:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-538591
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:17:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:17:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:17:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:18:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-538591
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                dbe282f3-bb05-42f0-8b46-e291ddf73e29
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-r5f6q                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-538591                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-d5gn6                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-538591             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-538591    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-r9czs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-538591             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node pause-538591 event: Registered Node pause-538591 in Controller
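
The Allocated resources block above checks out against the per-pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m, i.e. 850m of the 8-CPU (8000m) capacity ≈ 10.6%, displayed as 10%; the only CPU limit is kindnet's 100m (1%). Memory requests 70Mi + 100Mi + 50Mi = 220Mi and memory limits 170Mi + 50Mi = 220Mi, each well under 1% of the 32863356Ki capacity, hence shown as 0%.
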
	
	
	==> dmesg <==
	[Jan10 01:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001880] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378214] i8042: Warning: Keylock active
	[  +0.012673] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498024] block sda: the capability attribute has been deprecated.
	[  +0.086955] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024715] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8] <==
	{"level":"info","ts":"2026-01-10T02:17:39.508558Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:17:40.392040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:17:40.392206Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:17:40.392271Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-10T02:17:40.392289Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:17:40.392309Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.392908Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.392935Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:17:40.392952Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.392962Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.393525Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.394068Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-538591 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:17:40.394143Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:17:40.394159Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:17:40.394371Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.394650Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.394703Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.395510Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:17:40.395630Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:17:40.395634Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:17:40.395759Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:17:40.394396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:17:40.395782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:17:40.398790Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T02:17:40.399451Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:18:14 up  1:00,  0 user,  load average: 3.08, 2.06, 1.30
	Linux pause-538591 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960] <==
	I0110 02:17:52.113268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:17:52.113579       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:17:52.113724       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:17:52.113800       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:17:52.113849       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:17:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:17:52.317993       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:17:52.318032       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:17:52.318047       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:17:52.318836       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:17:52.711826       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:17:52.711921       1 metrics.go:72] Registering metrics
	I0110 02:17:52.749722       1 controller.go:711] "Syncing nftables rules"
	I0110 02:18:02.318596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:18:02.318684       1 main.go:301] handling current node
	I0110 02:18:12.325706       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:18:12.325739       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8] <==
	I0110 02:17:41.676962       1 policy_source.go:248] refreshing policies
	E0110 02:17:41.677083       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0110 02:17:41.703636       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 02:17:41.751606       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:17:41.759755       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:41.759826       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:17:41.780388       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:41.879393       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:17:42.602750       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:17:42.680483       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:17:42.680502       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:17:43.301297       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:17:43.338960       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:17:43.456562       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:17:43.462513       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 02:17:43.463684       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:17:43.467778       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:17:43.615789       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:17:44.449564       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:17:44.458039       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:17:44.465424       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:17:49.168461       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:17:49.220093       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:49.224046       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:49.620822       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598] <==
	I0110 02:17:48.418658       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419684       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419730       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419747       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419757       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419877       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419943       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420053       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420180       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420221       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420237       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420315       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420324       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420522       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420325       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.425975       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.426051       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.426081       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.431738       1 range_allocator.go:433] "Set node PodCIDR" node="pause-538591" podCIDRs=["10.244.0.0/24"]
	I0110 02:17:48.435705       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:17:48.520505       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.520522       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:17:48.520526       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:17:48.536416       1 shared_informer.go:377] "Caches are synced"
	I0110 02:18:03.421121       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6] <==
	I0110 02:17:50.021503       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:17:50.089965       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:17:50.190657       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:50.190698       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:17:50.190800       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:17:50.208830       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:17:50.208922       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:17:50.213869       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:17:50.214201       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:17:50.214218       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:17:50.215591       1 config.go:200] "Starting service config controller"
	I0110 02:17:50.215621       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:17:50.215627       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:17:50.215654       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:17:50.215696       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:17:50.215728       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:17:50.215708       1 config.go:309] "Starting node config controller"
	I0110 02:17:50.215772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:17:50.215779       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:17:50.316222       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:17:50.316246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:17:50.316340       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b] <==
	E0110 02:17:41.636461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:17:41.636498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:17:41.636582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:17:41.636798       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:17:41.636787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:17:42.491980       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:17:42.518058       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:17:42.624608       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:17:42.641656       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:17:42.643357       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:17:42.647778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:17:42.649117       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:17:42.749577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:17:42.768942       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:17:42.793028       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:17:42.820183       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:17:42.837500       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:17:42.892726       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:17:42.915238       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:17:42.966996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:17:43.000662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 02:17:43.029910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:17:43.063007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:17:43.130950       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I0110 02:17:46.026475       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.691881    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a216eb7-0550-4d24-ba73-869400727115-lib-modules\") pod \"kindnet-d5gn6\" (UID: \"5a216eb7-0550-4d24-ba73-869400727115\") " pod="kube-system/kindnet-d5gn6"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.691938    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a216eb7-0550-4d24-ba73-869400727115-xtables-lock\") pod \"kindnet-d5gn6\" (UID: \"5a216eb7-0550-4d24-ba73-869400727115\") " pod="kube-system/kindnet-d5gn6"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.691993    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e23fd55-d594-409b-bed8-74f9c7a7d159-kube-proxy\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692064    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e23fd55-d594-409b-bed8-74f9c7a7d159-xtables-lock\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692140    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkwm\" (UniqueName: \"kubernetes.io/projected/5a216eb7-0550-4d24-ba73-869400727115-kube-api-access-4tkwm\") pod \"kindnet-d5gn6\" (UID: \"5a216eb7-0550-4d24-ba73-869400727115\") " pod="kube-system/kindnet-d5gn6"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692178    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e23fd55-d594-409b-bed8-74f9c7a7d159-lib-modules\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692207    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lm5\" (UniqueName: \"kubernetes.io/projected/4e23fd55-d594-409b-bed8-74f9c7a7d159-kube-api-access-l7lm5\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:50 pause-538591 kubelet[1301]: I0110 02:17:50.304558    1301 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-r9czs" podStartSLOduration=1.304536874 podStartE2EDuration="1.304536874s" podCreationTimestamp="2026-01-10 02:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:17:50.304510653 +0000 UTC m=+6.124533517" watchObservedRunningTime="2026-01-10 02:17:50.304536874 +0000 UTC m=+6.124559739"
	Jan 10 02:17:52 pause-538591 kubelet[1301]: E0110 02:17:52.222445    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-538591" containerName="kube-scheduler"
	Jan 10 02:17:52 pause-538591 kubelet[1301]: I0110 02:17:52.309560    1301 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-d5gn6" podStartSLOduration=1.467899787 podStartE2EDuration="3.309542333s" podCreationTimestamp="2026-01-10 02:17:49 +0000 UTC" firstStartedPulling="2026-01-10 02:17:49.952033173 +0000 UTC m=+5.772056032" lastFinishedPulling="2026-01-10 02:17:51.79367572 +0000 UTC m=+7.613698578" observedRunningTime="2026-01-10 02:17:52.309282766 +0000 UTC m=+8.129305631" watchObservedRunningTime="2026-01-10 02:17:52.309542333 +0000 UTC m=+8.129565198"
	Jan 10 02:17:54 pause-538591 kubelet[1301]: E0110 02:17:54.821570    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-538591" containerName="kube-controller-manager"
	Jan 10 02:17:56 pause-538591 kubelet[1301]: E0110 02:17:56.614613    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-538591" containerName="kube-apiserver"
	Jan 10 02:17:59 pause-538591 kubelet[1301]: E0110 02:17:59.604765    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-538591" containerName="etcd"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: E0110 02:18:02.227392    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-538591" containerName="kube-scheduler"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: I0110 02:18:02.766929    1301 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: I0110 02:18:02.894401    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5-config-volume\") pod \"coredns-7d764666f9-r5f6q\" (UID: \"a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5\") " pod="kube-system/coredns-7d764666f9-r5f6q"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: I0110 02:18:02.894453    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bmjc\" (UniqueName: \"kubernetes.io/projected/a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5-kube-api-access-4bmjc\") pod \"coredns-7d764666f9-r5f6q\" (UID: \"a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5\") " pod="kube-system/coredns-7d764666f9-r5f6q"
	Jan 10 02:18:03 pause-538591 kubelet[1301]: E0110 02:18:03.322689    1301 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r5f6q" containerName="coredns"
	Jan 10 02:18:03 pause-538591 kubelet[1301]: I0110 02:18:03.332733    1301 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-r5f6q" podStartSLOduration=14.332716919 podStartE2EDuration="14.332716919s" podCreationTimestamp="2026-01-10 02:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:18:03.332544263 +0000 UTC m=+19.152567323" watchObservedRunningTime="2026-01-10 02:18:03.332716919 +0000 UTC m=+19.152739784"
	Jan 10 02:18:04 pause-538591 kubelet[1301]: E0110 02:18:04.324332    1301 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r5f6q" containerName="coredns"
	Jan 10 02:18:05 pause-538591 kubelet[1301]: E0110 02:18:05.326507    1301 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r5f6q" containerName="coredns"
	Jan 10 02:18:11 pause-538591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:18:11 pause-538591 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:18:11 pause-538591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:18:11 pause-538591 systemd[1]: kubelet.service: Consumed 1.178s CPU time.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538591 -n pause-538591
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538591 -n pause-538591: exit status 2 (351.972122ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-538591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-538591
helpers_test.go:244: (dbg) docker inspect pause-538591:

-- stdout --
	[
	    {
	        "Id": "2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92",
	        "Created": "2026-01-10T02:17:28.899977522Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 187468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:17:28.930713709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/hostname",
	        "HostsPath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/hosts",
	        "LogPath": "/var/lib/docker/containers/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92/2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92-json.log",
	        "Name": "/pause-538591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-538591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-538591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2260504e62d7d5c7fcd77a31950c21e4976f387290ab01574150721bca6f4b92",
	                "LowerDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c19f595a2117f6d6b6a5feb5054902e8090ef692ab502c61cab0ba36b1f2795/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-538591",
	                "Source": "/var/lib/docker/volumes/pause-538591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-538591",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-538591",
	                "name.minikube.sigs.k8s.io": "pause-538591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d9069ac7cb5668c10a248a42d759b78ad193561794d7129ea4d4ca666561c147",
	            "SandboxKey": "/var/run/docker/netns/d9069ac7cb56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32994"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32993"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-538591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "36808e08cac5f8c938bdeaf4b758395d8a3a1672776fb718c6d682efc7e7a7b8",
	                    "EndpointID": "e04263f73ce86b6ce8087388c985a75ccdc872d7e238998b84130cd15a3a9f71",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:5d:07:0a:99:82",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-538591",
	                        "2260504e62d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-538591 -n pause-538591
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-538591 -n pause-538591: exit status 2 (341.775723ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-538591 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │                     │
	│ stop    │ -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │                     │
	│ stop    │ -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:15 UTC │ 10 Jan 26 02:15 UTC │
	│ delete  │ -p scheduled-stop-930407                                                                                                     │ scheduled-stop-930407       │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:16 UTC │
	│ start   │ -p insufficient-storage-051762 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio             │ insufficient-storage-051762 │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │                     │
	│ delete  │ -p insufficient-storage-051762                                                                                               │ insufficient-storage-051762 │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:16 UTC │
	│ start   │ -p offline-crio-092866 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio            │ offline-crio-092866         │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p test-preload-107034 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p stopped-upgrade-116273 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ stopped-upgrade-116273      │ jenkins │ v1.35.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p running-upgrade-138757 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ running-upgrade-138757      │ jenkins │ v1.35.0 │ 10 Jan 26 02:16 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p running-upgrade-138757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ running-upgrade-138757      │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ stop    │ stopped-upgrade-116273 stop                                                                                                  │ stopped-upgrade-116273      │ jenkins │ v1.35.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p stopped-upgrade-116273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ stopped-upgrade-116273      │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ delete  │ -p offline-crio-092866                                                                                                       │ offline-crio-092866         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ image   │ test-preload-107034 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                  │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p pause-538591 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                    │ pause-538591                │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:18 UTC │
	│ stop    │ -p test-preload-107034                                                                                                       │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p test-preload-107034 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio           │ test-preload-107034         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ delete  │ -p stopped-upgrade-116273                                                                                                    │ stopped-upgrade-116273      │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p NoKubernetes-731674 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │                     │
	│ start   │ -p NoKubernetes-731674 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                        │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:17 UTC │
	│ start   │ -p NoKubernetes-731674 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:17 UTC │ 10 Jan 26 02:18 UTC │
	│ start   │ -p pause-538591 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                             │ pause-538591                │ jenkins │ v1.37.0 │ 10 Jan 26 02:18 UTC │ 10 Jan 26 02:18 UTC │
	│ pause   │ -p pause-538591 --alsologtostderr -v=5                                                                                       │ pause-538591                │ jenkins │ v1.37.0 │ 10 Jan 26 02:18 UTC │                     │
	│ delete  │ -p NoKubernetes-731674                                                                                                       │ NoKubernetes-731674         │ jenkins │ v1.37.0 │ 10 Jan 26 02:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:18:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:18:05.222000  196543 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:18:05.222610  196543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:05.222631  196543 out.go:374] Setting ErrFile to fd 2...
	I0110 02:18:05.222639  196543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:05.223217  196543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:18:05.224092  196543 out.go:368] Setting JSON to false
	I0110 02:18:05.225151  196543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3634,"bootTime":1768007851,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:18:05.225231  196543 start.go:143] virtualization: kvm guest
	I0110 02:18:05.226853  196543 out.go:179] * [pause-538591] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:18:05.228168  196543 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:18:05.228172  196543 notify.go:221] Checking for updates...
	I0110 02:18:05.230183  196543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:18:05.231260  196543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:18:05.232327  196543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:18:05.233372  196543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:18:05.234377  196543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:18:05.235775  196543 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:05.236302  196543 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:18:05.261128  196543 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:18:05.261205  196543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:18:05.319186  196543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:18:05.309345582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:18:05.319303  196543 docker.go:319] overlay module found
	I0110 02:18:05.321572  196543 out.go:179] * Using the docker driver based on existing profile
	I0110 02:18:05.322617  196543 start.go:309] selected driver: docker
	I0110 02:18:05.322631  196543 start.go:928] validating driver "docker" against &{Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:18:05.322769  196543 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:18:05.322867  196543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:18:05.378266  196543 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:18:05.368568714 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:18:05.378865  196543 cni.go:84] Creating CNI manager for ""
	I0110 02:18:05.378948  196543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:18:05.379005  196543 start.go:353] cluster config:
	{Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:18:05.380708  196543 out.go:179] * Starting "pause-538591" primary control-plane node in "pause-538591" cluster
	I0110 02:18:05.381770  196543 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:18:05.382822  196543 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:18:05.383960  196543 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:18:05.383998  196543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:18:05.384011  196543 cache.go:65] Caching tarball of preloaded images
	I0110 02:18:05.384057  196543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:18:05.384105  196543 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:18:05.384120  196543 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:18:05.384283  196543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/config.json ...
	I0110 02:18:05.403988  196543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:18:05.404008  196543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:18:05.404023  196543 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:18:05.404047  196543 start.go:360] acquireMachinesLock for pause-538591: {Name:mk88a054b31d3424f521abede3f20061c56e66a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:18:05.404097  196543 start.go:364] duration metric: took 34.796µs to acquireMachinesLock for "pause-538591"
	I0110 02:18:05.404110  196543 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:18:05.404114  196543 fix.go:54] fixHost starting: 
	I0110 02:18:05.404312  196543 cli_runner.go:164] Run: docker container inspect pause-538591 --format={{.State.Status}}
	I0110 02:18:05.421140  196543 fix.go:112] recreateIfNeeded on pause-538591: state=Running err=<nil>
	W0110 02:18:05.421163  196543 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:18:01.544512  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	W0110 02:18:04.044444  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	I0110 02:18:05.423020  196543 out.go:252] * Updating the running docker "pause-538591" container ...
	I0110 02:18:05.423046  196543 machine.go:94] provisionDockerMachine start ...
	I0110 02:18:05.423103  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:05.440947  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:05.441211  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:05.441229  196543 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:18:05.565635  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-538591
	
	I0110 02:18:05.565693  196543 ubuntu.go:182] provisioning hostname "pause-538591"
	I0110 02:18:05.565787  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:05.584268  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:05.584491  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:05.584502  196543 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-538591 && echo "pause-538591" | sudo tee /etc/hostname
	I0110 02:18:05.719743  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-538591
	
	I0110 02:18:05.719803  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:05.737403  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:05.737635  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:05.737651  196543 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-538591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-538591/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-538591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:18:05.863066  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:18:05.863094  196543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:18:05.863112  196543 ubuntu.go:190] setting up certificates
	I0110 02:18:05.863133  196543 provision.go:84] configureAuth start
	I0110 02:18:05.863191  196543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-538591
	I0110 02:18:05.880605  196543 provision.go:143] copyHostCerts
	I0110 02:18:05.880673  196543 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:18:05.880689  196543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:18:05.880764  196543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:18:05.880878  196543 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:18:05.880903  196543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:18:05.880939  196543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:18:05.881074  196543 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:18:05.881087  196543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:18:05.881118  196543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:18:05.881210  196543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.pause-538591 san=[127.0.0.1 192.168.85.2 localhost minikube pause-538591]
	I0110 02:18:06.064050  196543 provision.go:177] copyRemoteCerts
	I0110 02:18:06.064100  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:18:06.064184  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.082356  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.174604  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:18:06.192064  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 02:18:06.208762  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:18:06.225446  196543 provision.go:87] duration metric: took 362.292289ms to configureAuth
	I0110 02:18:06.225476  196543 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:18:06.225667  196543 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:06.225754  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.244326  196543 main.go:144] libmachine: Using SSH client type: native
	I0110 02:18:06.244574  196543 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0110 02:18:06.244599  196543 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:18:06.557469  196543 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:18:06.557493  196543 machine.go:97] duration metric: took 1.134440381s to provisionDockerMachine
	I0110 02:18:06.557506  196543 start.go:293] postStartSetup for "pause-538591" (driver="docker")
	I0110 02:18:06.557516  196543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:18:06.557571  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:18:06.557615  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.575846  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.667694  196543 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:18:06.671052  196543 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:18:06.671074  196543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:18:06.671083  196543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:18:06.671139  196543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:18:06.671226  196543 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:18:06.671333  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:18:06.678710  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:18:06.695409  196543 start.go:296] duration metric: took 137.890585ms for postStartSetup
	I0110 02:18:06.695486  196543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:18:06.695543  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.712847  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.803087  196543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:18:06.807951  196543 fix.go:56] duration metric: took 1.40383225s for fixHost
	I0110 02:18:06.807972  196543 start.go:83] releasing machines lock for "pause-538591", held for 1.403867901s
	I0110 02:18:06.808052  196543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-538591
	I0110 02:18:06.826163  196543 ssh_runner.go:195] Run: cat /version.json
	I0110 02:18:06.826238  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.826254  196543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:18:06.826307  196543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538591
	I0110 02:18:06.845264  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.845621  196543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/pause-538591/id_rsa Username:docker}
	I0110 02:18:06.986999  196543 ssh_runner.go:195] Run: systemctl --version
	I0110 02:18:06.993837  196543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:18:07.028116  196543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:18:07.032781  196543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:18:07.032846  196543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:18:07.040765  196543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:18:07.040782  196543 start.go:496] detecting cgroup driver to use...
	I0110 02:18:07.040812  196543 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:18:07.040849  196543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:18:07.055449  196543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:18:07.067188  196543 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:18:07.067232  196543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:18:07.081273  196543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:18:07.092608  196543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:18:07.197303  196543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:18:07.305415  196543 docker.go:234] disabling docker service ...
	I0110 02:18:07.305478  196543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:18:07.319243  196543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:18:07.331263  196543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:18:07.436165  196543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:18:07.540521  196543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:18:07.553278  196543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:18:07.567022  196543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:18:07.567076  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.575560  196543 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:18:07.575608  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.584125  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.592332  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.600544  196543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:18:07.608034  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.616716  196543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.624318  196543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:07.632783  196543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:18:07.639635  196543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:18:07.646465  196543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:07.749425  196543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:18:07.931910  196543 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:18:07.931979  196543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:18:07.936174  196543 start.go:574] Will wait 60s for crictl version
	I0110 02:18:07.936252  196543 ssh_runner.go:195] Run: which crictl
	I0110 02:18:07.940129  196543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:18:07.965282  196543 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:18:07.965347  196543 ssh_runner.go:195] Run: crio --version
	I0110 02:18:07.991240  196543 ssh_runner.go:195] Run: crio --version
	I0110 02:18:08.019478  196543 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:18:08.020514  196543 cli_runner.go:164] Run: docker network inspect pause-538591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:18:08.038443  196543 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:18:08.043184  196543 kubeadm.go:884] updating cluster {Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:18:08.043326  196543 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:18:08.043401  196543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:18:08.074898  196543 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:18:08.074919  196543 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:18:08.074967  196543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:18:08.100169  196543 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:18:08.100187  196543 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:18:08.100193  196543 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:18:08.100285  196543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-538591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:18:08.100339  196543 ssh_runner.go:195] Run: crio config
	I0110 02:18:08.145938  196543 cni.go:84] Creating CNI manager for ""
	I0110 02:18:08.145969  196543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:18:08.145985  196543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:18:08.146007  196543 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-538591 NodeName:pause-538591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:18:08.146126  196543 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-538591"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:18:08.146186  196543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:18:08.154177  196543 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:18:08.154244  196543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:18:08.161426  196543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 02:18:08.174037  196543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:18:08.185627  196543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I0110 02:18:08.197990  196543 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:18:08.201462  196543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:08.308877  196543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:18:08.321490  196543 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591 for IP: 192.168.85.2
	I0110 02:18:08.321510  196543 certs.go:195] generating shared ca certs ...
	I0110 02:18:08.321528  196543 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:18:08.321685  196543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:18:08.321747  196543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:18:08.321770  196543 certs.go:257] generating profile certs ...
	I0110 02:18:08.321866  196543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.key
	I0110 02:18:08.321981  196543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/apiserver.key.b28a66ad
	I0110 02:18:08.322058  196543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/proxy-client.key
	I0110 02:18:08.322191  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:18:08.322231  196543 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:18:08.322246  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:18:08.322287  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:18:08.322322  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:18:08.322403  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:18:08.322467  196543 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:18:08.323155  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:18:08.341426  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:18:08.358769  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:18:08.375675  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:18:08.393157  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 02:18:08.409757  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:18:08.427105  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:18:08.443775  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:18:08.460339  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:18:08.476329  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:18:08.492587  196543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:18:08.508823  196543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:18:08.520512  196543 ssh_runner.go:195] Run: openssl version
	I0110 02:18:08.526352  196543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.533138  196543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:18:08.540102  196543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.543978  196543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.544023  196543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:18:08.578340  196543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:18:08.585473  196543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.592584  196543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:18:08.599581  196543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.602956  196543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.602994  196543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:18:08.637701  196543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:18:08.644821  196543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.652147  196543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:18:08.658902  196543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.662245  196543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.662285  196543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:18:08.695791  196543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:18:08.702923  196543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:18:08.706319  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:18:08.741298  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:18:08.775073  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:18:08.809268  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:18:08.845529  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:18:08.880764  196543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:18:08.915981  196543 kubeadm.go:401] StartCluster: {Name:pause-538591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-538591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:18:08.916119  196543 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:18:08.916196  196543 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:18:08.944588  196543 cri.go:96] found id: "0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327"
	I0110 02:18:08.944609  196543 cri.go:96] found id: "79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960"
	I0110 02:18:08.944612  196543 cri.go:96] found id: "fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6"
	I0110 02:18:08.944616  196543 cri.go:96] found id: "0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8"
	I0110 02:18:08.944619  196543 cri.go:96] found id: "cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8"
	I0110 02:18:08.944622  196543 cri.go:96] found id: "2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598"
	I0110 02:18:08.944624  196543 cri.go:96] found id: "912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b"
	I0110 02:18:08.944627  196543 cri.go:96] found id: ""
	I0110 02:18:08.944675  196543 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:18:08.956130  196543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:18:08Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:18:08.956199  196543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:18:08.963854  196543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:18:08.963875  196543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:18:08.963932  196543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:18:08.970991  196543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:18:08.971698  196543 kubeconfig.go:125] found "pause-538591" server: "https://192.168.85.2:8443"
	I0110 02:18:08.972763  196543 kapi.go:59] client config for pause-538591: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.key", CAFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:18:08.973167  196543 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 02:18:08.973182  196543 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 02:18:08.973188  196543 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 02:18:08.973194  196543 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 02:18:08.973200  196543 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 02:18:08.973205  196543 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 02:18:08.973541  196543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:18:08.980912  196543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 02:18:08.980940  196543 kubeadm.go:602] duration metric: took 17.05937ms to restartPrimaryControlPlane
	I0110 02:18:08.980947  196543 kubeadm.go:403] duration metric: took 64.976546ms to StartCluster
	I0110 02:18:08.980959  196543 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:18:08.981012  196543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:18:08.982065  196543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:18:08.982274  196543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:18:08.982339  196543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:18:08.982566  196543 config.go:182] Loaded profile config "pause-538591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:08.984874  196543 out.go:179] * Verifying Kubernetes components...
	I0110 02:18:08.984875  196543 out.go:179] * Enabled addons: 
	I0110 02:18:08.277763  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 02:18:08.277803  182176 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:18:08.985958  196543 addons.go:530] duration metric: took 3.623194ms for enable addons: enabled=[]
	I0110 02:18:08.985984  196543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:09.091045  196543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:18:09.103738  196543 node_ready.go:35] waiting up to 6m0s for node "pause-538591" to be "Ready" ...
	I0110 02:18:09.110995  196543 node_ready.go:49] node "pause-538591" is "Ready"
	I0110 02:18:09.111015  196543 node_ready.go:38] duration metric: took 7.246677ms for node "pause-538591" to be "Ready" ...
	I0110 02:18:09.111027  196543 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:18:09.111074  196543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:18:09.121923  196543 api_server.go:72] duration metric: took 139.625732ms to wait for apiserver process to appear ...
	I0110 02:18:09.121939  196543 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:18:09.121954  196543 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:18:09.126791  196543 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:18:09.127650  196543 api_server.go:141] control plane version: v1.35.0
	I0110 02:18:09.127677  196543 api_server.go:131] duration metric: took 5.731141ms to wait for apiserver health ...
	I0110 02:18:09.127686  196543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:18:09.130815  196543 system_pods.go:59] 7 kube-system pods found
	I0110 02:18:09.130844  196543 system_pods.go:61] "coredns-7d764666f9-r5f6q" [a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5] Running
	I0110 02:18:09.130851  196543 system_pods.go:61] "etcd-pause-538591" [ac5b5c4a-75f1-47a6-a54d-dcec6470d3ff] Running
	I0110 02:18:09.130857  196543 system_pods.go:61] "kindnet-d5gn6" [5a216eb7-0550-4d24-ba73-869400727115] Running
	I0110 02:18:09.130866  196543 system_pods.go:61] "kube-apiserver-pause-538591" [f93ce20d-576b-4015-8089-93c3fec476f4] Running
	I0110 02:18:09.130876  196543 system_pods.go:61] "kube-controller-manager-pause-538591" [0c7dd188-7c10-4049-86c4-2e171f514da3] Running
	I0110 02:18:09.130882  196543 system_pods.go:61] "kube-proxy-r9czs" [4e23fd55-d594-409b-bed8-74f9c7a7d159] Running
	I0110 02:18:09.130898  196543 system_pods.go:61] "kube-scheduler-pause-538591" [719a271a-e4a3-4437-8c23-5a69e3dec0c1] Running
	I0110 02:18:09.130903  196543 system_pods.go:74] duration metric: took 3.211573ms to wait for pod list to return data ...
	I0110 02:18:09.130911  196543 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:18:09.132597  196543 default_sa.go:45] found service account: "default"
	I0110 02:18:09.132616  196543 default_sa.go:55] duration metric: took 1.699216ms for default service account to be created ...
	I0110 02:18:09.132625  196543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:18:09.134920  196543 system_pods.go:86] 7 kube-system pods found
	I0110 02:18:09.134943  196543 system_pods.go:89] "coredns-7d764666f9-r5f6q" [a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5] Running
	I0110 02:18:09.134950  196543 system_pods.go:89] "etcd-pause-538591" [ac5b5c4a-75f1-47a6-a54d-dcec6470d3ff] Running
	I0110 02:18:09.134955  196543 system_pods.go:89] "kindnet-d5gn6" [5a216eb7-0550-4d24-ba73-869400727115] Running
	I0110 02:18:09.134959  196543 system_pods.go:89] "kube-apiserver-pause-538591" [f93ce20d-576b-4015-8089-93c3fec476f4] Running
	I0110 02:18:09.134963  196543 system_pods.go:89] "kube-controller-manager-pause-538591" [0c7dd188-7c10-4049-86c4-2e171f514da3] Running
	I0110 02:18:09.134966  196543 system_pods.go:89] "kube-proxy-r9czs" [4e23fd55-d594-409b-bed8-74f9c7a7d159] Running
	I0110 02:18:09.134970  196543 system_pods.go:89] "kube-scheduler-pause-538591" [719a271a-e4a3-4437-8c23-5a69e3dec0c1] Running
	I0110 02:18:09.134975  196543 system_pods.go:126] duration metric: took 2.345852ms to wait for k8s-apps to be running ...
	I0110 02:18:09.134983  196543 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:18:09.135015  196543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:09.147867  196543 system_svc.go:56] duration metric: took 12.876267ms WaitForService to wait for kubelet
	I0110 02:18:09.147901  196543 kubeadm.go:587] duration metric: took 165.603554ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:18:09.147922  196543 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:18:09.150184  196543 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:18:09.150204  196543 node_conditions.go:123] node cpu capacity is 8
	I0110 02:18:09.150216  196543 node_conditions.go:105] duration metric: took 2.288867ms to run NodePressure ...
	I0110 02:18:09.150224  196543 start.go:242] waiting for startup goroutines ...
	I0110 02:18:09.150231  196543 start.go:247] waiting for cluster config update ...
	I0110 02:18:09.150237  196543 start.go:256] writing updated cluster config ...
	I0110 02:18:09.150488  196543 ssh_runner.go:195] Run: rm -f paused
	I0110 02:18:09.153984  196543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:18:09.155012  196543 kapi.go:59] client config for pause-538591: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/profiles/pause-538591/client.key", CAFile:"/home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:18:09.157211  196543 pod_ready.go:83] waiting for pod "coredns-7d764666f9-r5f6q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.160808  196543 pod_ready.go:94] pod "coredns-7d764666f9-r5f6q" is "Ready"
	I0110 02:18:09.160824  196543 pod_ready.go:86] duration metric: took 3.595771ms for pod "coredns-7d764666f9-r5f6q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.162388  196543 pod_ready.go:83] waiting for pod "etcd-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.165605  196543 pod_ready.go:94] pod "etcd-pause-538591" is "Ready"
	I0110 02:18:09.165620  196543 pod_ready.go:86] duration metric: took 3.215946ms for pod "etcd-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.167258  196543 pod_ready.go:83] waiting for pod "kube-apiserver-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.170361  196543 pod_ready.go:94] pod "kube-apiserver-pause-538591" is "Ready"
	I0110 02:18:09.170375  196543 pod_ready.go:86] duration metric: took 3.102577ms for pod "kube-apiserver-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.172038  196543 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.558432  196543 pod_ready.go:94] pod "kube-controller-manager-pause-538591" is "Ready"
	I0110 02:18:09.558461  196543 pod_ready.go:86] duration metric: took 386.406548ms for pod "kube-controller-manager-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:09.758260  196543 pod_ready.go:83] waiting for pod "kube-proxy-r9czs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.158148  196543 pod_ready.go:94] pod "kube-proxy-r9czs" is "Ready"
	I0110 02:18:10.158172  196543 pod_ready.go:86] duration metric: took 399.884687ms for pod "kube-proxy-r9czs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.357743  196543 pod_ready.go:83] waiting for pod "kube-scheduler-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.757950  196543 pod_ready.go:94] pod "kube-scheduler-pause-538591" is "Ready"
	I0110 02:18:10.757972  196543 pod_ready.go:86] duration metric: took 400.2062ms for pod "kube-scheduler-pause-538591" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:18:10.757983  196543 pod_ready.go:40] duration metric: took 1.603972285s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:18:10.800739  196543 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:18:10.802252  196543 out.go:179] * Done! kubectl is now configured to use "pause-538591" cluster and "default" namespace by default
	W0110 02:18:06.543542  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	W0110 02:18:08.543811  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	W0110 02:18:10.544092  188821 pod_ready.go:104] pod "coredns-7d764666f9-mvcjt" is not "Ready", error: <nil>
	I0110 02:18:12.960396  195868 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 7e8addb421f6b8dceecc99ab6c564d04e1229f89e4e14a433f888de09a131471 23e60fb805bf913c8de5c75674050edb56c2c2c1a0cb07f6ca7919ce9da4d635 1f02b4d027bb56e5757f5edece988137e189cbc8698338347517bcc363273b75 3a648d3ef5f4835c1c0c082f0389fe5b710dfcc48dc9a4b5e70e1aa6afa8fbbd: (17.867491096s)
	I0110 02:18:12.960461  195868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:18:12.975134  195868 out.go:179]   - Kubernetes: Stopped
	I0110 02:18:12.977471  195868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:18:13.016291  195868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:18:13.020869  195868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:18:13.020945  195868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:18:13.028952  195868 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:18:13.028970  195868 start.go:496] detecting cgroup driver to use...
	I0110 02:18:13.028995  195868 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:18:13.029038  195868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:18:13.043722  195868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:18:13.055691  195868 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:18:13.055739  195868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:18:13.069247  195868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:18:13.083576  195868 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:18:13.185188  195868 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:18:13.287528  195868 docker.go:234] disabling docker service ...
	I0110 02:18:13.287590  195868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:18:13.303224  195868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:18:13.318218  195868 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:18:13.408954  195868 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:18:13.521567  195868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:18:13.534363  195868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:18:13.549281  195868 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I0110 02:18:13.549327  195868 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 02:18:13.549368  195868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:13.558473  195868 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:18:13.558527  195868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:13.568017  195868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:13.576831  195868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:18:13.585519  195868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:18:13.593966  195868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:18:13.601603  195868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:18:13.609379  195868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:18:13.705861  195868 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:18:13.868831  195868 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:18:13.868920  195868 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:18:13.873625  195868 start.go:574] Will wait 60s for crictl version
	I0110 02:18:13.873692  195868 ssh_runner.go:195] Run: which crictl
	I0110 02:18:13.878469  195868 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:18:13.906810  195868 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:18:13.906911  195868 ssh_runner.go:195] Run: crio --version
	I0110 02:18:13.938017  195868 ssh_runner.go:195] Run: crio --version
	I0110 02:18:13.968373  195868 out.go:179] * Preparing CRI-O 1.35.0 ...
	I0110 02:18:13.969699  195868 ssh_runner.go:195] Run: rm -f paused
	I0110 02:18:13.974514  195868 out.go:179] * Done! minikube is ready without Kubernetes!
	I0110 02:18:13.975688  195868 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:18:13.279947  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 02:18:13.279976  182176 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:18:13.486672  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:42640->192.168.103.2:8443: read: connection reset by peer
	I0110 02:18:13.774026  182176 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:18:13.774408  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I0110 02:18:14.273941  182176 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:18:14.274302  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I0110 02:18:14.773910  182176 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:18:14.774297  182176 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	
	
	==> CRI-O <==
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.847703377Z" level=info msg="RDT not available in the host system"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.847717202Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.848495377Z" level=info msg="Conmon does support the --sync option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.848511873Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.848523698Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.849235286Z" level=info msg="Conmon does support the --sync option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.849248645Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.854203839Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.854222879Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.854703599Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"en
forcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [cri
o.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.855098787Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.855144049Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926180505Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-r5f6q Namespace:kube-system ID:70a74cb4ac8718c0f13cc4f89b977996bf21aa5cbd3dd79f5d0fee6b35e0771f UID:a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5 NetNS:/var/run/netns/b3a67363-7079-4356-a0f2-b34573c88545 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007eb38}] Aliases:map[]}"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926373093Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-r5f6q for CNI network kindnet (type=ptp)"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926800898Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926828554Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.926933075Z" level=info msg="Create NRI interface"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927520962Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927561863Z" level=info msg="runtime interface created"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.92758639Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.92759486Z" level=info msg="runtime interface starting up..."
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927603074Z" level=info msg="starting plugins..."
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.927626155Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 02:18:07 pause-538591 crio[2215]: time="2026-01-10T02:18:07.928305677Z" level=info msg="No systemd watchdog enabled"
	Jan 10 02:18:07 pause-538591 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0cd6b46141d35       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     12 seconds ago      Running             coredns                   0                   70a74cb4ac871       coredns-7d764666f9-r5f6q               kube-system
	79522e27c31af       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   24 seconds ago      Running             kindnet-cni               0                   49bf448dcc1c2       kindnet-d5gn6                          kube-system
	fb80d4f2e5233       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     25 seconds ago      Running             kube-proxy                0                   5e412cef28acb       kube-proxy-r9czs                       kube-system
	0c62c16a7490e       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     36 seconds ago      Running             kube-apiserver            0                   7a9626a5b91e6       kube-apiserver-pause-538591            kube-system
	cca61452faa79       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     36 seconds ago      Running             etcd                      0                   bffbb4929d539       etcd-pause-538591                      kube-system
	2fe6e76f66283       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     36 seconds ago      Running             kube-controller-manager   0                   214f664cb819f       kube-controller-manager-pause-538591   kube-system
	912bde97aebf1       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     36 seconds ago      Running             kube-scheduler            0                   02f390ee2e2d0       kube-scheduler-pause-538591            kube-system
	
	
	==> coredns [0cd6b46141d3596075a2e35864f26a9594c5564c9062d010092d222220a53327] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52056 - 8255 "HINFO IN 6304715311341725238.8675739646746251486. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064956627s
	
	
	==> describe nodes <==
	Name:               pause-538591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-538591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=pause-538591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_17_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:17:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-538591
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:17:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:17:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:17:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:18:02 +0000   Sat, 10 Jan 2026 02:18:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-538591
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                dbe282f3-bb05-42f0-8b46-e291ddf73e29
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-r5f6q                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-538591                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-d5gn6                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-538591             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-538591    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-r9czs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-538591             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node pause-538591 event: Registered Node pause-538591 in Controller
	
	
	==> dmesg <==
	[Jan10 01:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001880] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378214] i8042: Warning: Keylock active
	[  +0.012673] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498024] block sda: the capability attribute has been deprecated.
	[  +0.086955] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024715] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [cca61452faa7940827281bf294cac47345ecab8de8eea2746c864d19445165f8] <==
	{"level":"info","ts":"2026-01-10T02:17:39.508558Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:17:40.392040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:17:40.392206Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:17:40.392271Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-10T02:17:40.392289Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:17:40.392309Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.392908Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.392935Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:17:40.392952Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.392962Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:17:40.393525Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.394068Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-538591 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:17:40.394143Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:17:40.394159Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:17:40.394371Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.394650Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.394703Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:17:40.395510Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:17:40.395630Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:17:40.395634Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:17:40.395759Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:17:40.394396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:17:40.395782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:17:40.398790Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T02:17:40.399451Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:18:16 up  1:00,  0 user,  load average: 3.08, 2.06, 1.30
	Linux pause-538591 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79522e27c31af219993fe174e5a5620c061de48834cf7aa6d5ae97a3c3dad960] <==
	I0110 02:17:52.113268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:17:52.113579       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:17:52.113724       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:17:52.113800       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:17:52.113849       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:17:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:17:52.317993       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:17:52.318032       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:17:52.318047       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:17:52.318836       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:17:52.711826       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:17:52.711921       1 metrics.go:72] Registering metrics
	I0110 02:17:52.749722       1 controller.go:711] "Syncing nftables rules"
	I0110 02:18:02.318596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:18:02.318684       1 main.go:301] handling current node
	I0110 02:18:12.325706       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:18:12.325739       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c62c16a7490e6c973be02b02136b8349d9805422c1a764e4203f3fb440bf8f8] <==
	I0110 02:17:41.676962       1 policy_source.go:248] refreshing policies
	E0110 02:17:41.677083       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0110 02:17:41.703636       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 02:17:41.751606       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:17:41.759755       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:41.759826       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:17:41.780388       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:41.879393       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:17:42.602750       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:17:42.680483       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:17:42.680502       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:17:43.301297       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:17:43.338960       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:17:43.456562       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:17:43.462513       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 02:17:43.463684       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:17:43.467778       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:17:43.615789       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:17:44.449564       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:17:44.458039       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:17:44.465424       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:17:49.168461       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:17:49.220093       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:49.224046       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:17:49.620822       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2fe6e76f66283939a29885bc06387085aef282e01e58f5573ff55899e4308598] <==
	I0110 02:17:48.418658       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419684       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419730       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419747       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419757       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419877       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.419943       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420053       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420180       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420221       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420237       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420315       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420324       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420522       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.420325       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.425975       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.426051       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.426081       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.431738       1 range_allocator.go:433] "Set node PodCIDR" node="pause-538591" podCIDRs=["10.244.0.0/24"]
	I0110 02:17:48.435705       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:17:48.520505       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:48.520522       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:17:48.520526       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:17:48.536416       1 shared_informer.go:377] "Caches are synced"
	I0110 02:18:03.421121       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [fb80d4f2e5233ecb8291e4c8170a081344a456dc06680eb325439cb4143a57f6] <==
	I0110 02:17:50.021503       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:17:50.089965       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:17:50.190657       1 shared_informer.go:377] "Caches are synced"
	I0110 02:17:50.190698       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:17:50.190800       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:17:50.208830       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:17:50.208922       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:17:50.213869       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:17:50.214201       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:17:50.214218       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:17:50.215591       1 config.go:200] "Starting service config controller"
	I0110 02:17:50.215621       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:17:50.215627       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:17:50.215654       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:17:50.215696       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:17:50.215728       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:17:50.215708       1 config.go:309] "Starting node config controller"
	I0110 02:17:50.215772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:17:50.215779       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:17:50.316222       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:17:50.316246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:17:50.316340       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [912bde97aebf1616ad9e4d63d56ffd6f1612b5c2adfa2f3a77dc4d260380497b] <==
	E0110 02:17:41.636461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:17:41.636498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:17:41.636582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:17:41.636798       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:17:41.636787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:17:42.491980       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:17:42.518058       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:17:42.624608       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:17:42.641656       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:17:42.643357       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:17:42.647778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:17:42.649117       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:17:42.749577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:17:42.768942       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:17:42.793028       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:17:42.820183       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:17:42.837500       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:17:42.892726       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:17:42.915238       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:17:42.966996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:17:43.000662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 02:17:43.029910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:17:43.063007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:17:43.130950       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I0110 02:17:46.026475       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.691881    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a216eb7-0550-4d24-ba73-869400727115-lib-modules\") pod \"kindnet-d5gn6\" (UID: \"5a216eb7-0550-4d24-ba73-869400727115\") " pod="kube-system/kindnet-d5gn6"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.691938    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a216eb7-0550-4d24-ba73-869400727115-xtables-lock\") pod \"kindnet-d5gn6\" (UID: \"5a216eb7-0550-4d24-ba73-869400727115\") " pod="kube-system/kindnet-d5gn6"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.691993    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e23fd55-d594-409b-bed8-74f9c7a7d159-kube-proxy\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692064    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e23fd55-d594-409b-bed8-74f9c7a7d159-xtables-lock\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692140    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkwm\" (UniqueName: \"kubernetes.io/projected/5a216eb7-0550-4d24-ba73-869400727115-kube-api-access-4tkwm\") pod \"kindnet-d5gn6\" (UID: \"5a216eb7-0550-4d24-ba73-869400727115\") " pod="kube-system/kindnet-d5gn6"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692178    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e23fd55-d594-409b-bed8-74f9c7a7d159-lib-modules\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:49 pause-538591 kubelet[1301]: I0110 02:17:49.692207    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lm5\" (UniqueName: \"kubernetes.io/projected/4e23fd55-d594-409b-bed8-74f9c7a7d159-kube-api-access-l7lm5\") pod \"kube-proxy-r9czs\" (UID: \"4e23fd55-d594-409b-bed8-74f9c7a7d159\") " pod="kube-system/kube-proxy-r9czs"
	Jan 10 02:17:50 pause-538591 kubelet[1301]: I0110 02:17:50.304558    1301 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-r9czs" podStartSLOduration=1.304536874 podStartE2EDuration="1.304536874s" podCreationTimestamp="2026-01-10 02:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:17:50.304510653 +0000 UTC m=+6.124533517" watchObservedRunningTime="2026-01-10 02:17:50.304536874 +0000 UTC m=+6.124559739"
	Jan 10 02:17:52 pause-538591 kubelet[1301]: E0110 02:17:52.222445    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-538591" containerName="kube-scheduler"
	Jan 10 02:17:52 pause-538591 kubelet[1301]: I0110 02:17:52.309560    1301 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-d5gn6" podStartSLOduration=1.467899787 podStartE2EDuration="3.309542333s" podCreationTimestamp="2026-01-10 02:17:49 +0000 UTC" firstStartedPulling="2026-01-10 02:17:49.952033173 +0000 UTC m=+5.772056032" lastFinishedPulling="2026-01-10 02:17:51.79367572 +0000 UTC m=+7.613698578" observedRunningTime="2026-01-10 02:17:52.309282766 +0000 UTC m=+8.129305631" watchObservedRunningTime="2026-01-10 02:17:52.309542333 +0000 UTC m=+8.129565198"
	Jan 10 02:17:54 pause-538591 kubelet[1301]: E0110 02:17:54.821570    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-538591" containerName="kube-controller-manager"
	Jan 10 02:17:56 pause-538591 kubelet[1301]: E0110 02:17:56.614613    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-538591" containerName="kube-apiserver"
	Jan 10 02:17:59 pause-538591 kubelet[1301]: E0110 02:17:59.604765    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-538591" containerName="etcd"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: E0110 02:18:02.227392    1301 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-538591" containerName="kube-scheduler"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: I0110 02:18:02.766929    1301 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: I0110 02:18:02.894401    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5-config-volume\") pod \"coredns-7d764666f9-r5f6q\" (UID: \"a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5\") " pod="kube-system/coredns-7d764666f9-r5f6q"
	Jan 10 02:18:02 pause-538591 kubelet[1301]: I0110 02:18:02.894453    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bmjc\" (UniqueName: \"kubernetes.io/projected/a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5-kube-api-access-4bmjc\") pod \"coredns-7d764666f9-r5f6q\" (UID: \"a6ec9ba4-1da5-4a38-b3b2-8883482fc4c5\") " pod="kube-system/coredns-7d764666f9-r5f6q"
	Jan 10 02:18:03 pause-538591 kubelet[1301]: E0110 02:18:03.322689    1301 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r5f6q" containerName="coredns"
	Jan 10 02:18:03 pause-538591 kubelet[1301]: I0110 02:18:03.332733    1301 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-r5f6q" podStartSLOduration=14.332716919 podStartE2EDuration="14.332716919s" podCreationTimestamp="2026-01-10 02:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:18:03.332544263 +0000 UTC m=+19.152567323" watchObservedRunningTime="2026-01-10 02:18:03.332716919 +0000 UTC m=+19.152739784"
	Jan 10 02:18:04 pause-538591 kubelet[1301]: E0110 02:18:04.324332    1301 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r5f6q" containerName="coredns"
	Jan 10 02:18:05 pause-538591 kubelet[1301]: E0110 02:18:05.326507    1301 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r5f6q" containerName="coredns"
	Jan 10 02:18:11 pause-538591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:18:11 pause-538591 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:18:11 pause-538591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:18:11 pause-538591 systemd[1]: kubelet.service: Consumed 1.178s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538591 -n pause-538591
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538591 -n pause-538591: exit status 2 (356.721883ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-538591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.856086ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:25:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
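For context on the exit status above: the stderr shows that minikube's paused-state probe shelled out to `sudo runc list -f json`, which failed on this crio-based profile because /run/runc was absent. A rough manual check from the host, offered only as a sketch (it assumes the profile is still running and that crictl is present in the node image), would be:

	out/minikube-linux-amd64 ssh -p old-k8s-version-188604 -- "sudo ls /run/runc"         # expected to fail with 'No such file or directory', matching the stderr above
	out/minikube-linux-amd64 ssh -p old-k8s-version-188604 -- "sudo runc list -f json"    # the exact probe that exited with status 1 above
	out/minikube-linux-amd64 ssh -p old-k8s-version-188604 -- "sudo crictl ps -a"         # lists the same containers through the CRI, independent of runc
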
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-188604 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-188604 describe deploy/metrics-server -n kube-system: exit status 1 (66.640929ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-188604 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
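The expectation here is that the `--registries=MetricsServer=fake.domain` and `--images=MetricsServer=registry.k8s.io/echoserver:1.4` flags combine into the image reference fake.domain/registry.k8s.io/echoserver:1.4 on the metrics-server deployment. Since the deployment was never created in this run there is nothing to inspect, but on a cluster where the addon did deploy, a spot-check of the rendered image might look like the following (a hypothetical follow-up, not part of this run):

	kubectl --context old-k8s-version-188604 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
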
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-188604
helpers_test.go:244: (dbg) docker inspect old-k8s-version-188604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339",
	        "Created": "2026-01-10T02:24:20.28221194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:24:20.317834413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/hostname",
	        "HostsPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/hosts",
	        "LogPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339-json.log",
	        "Name": "/old-k8s-version-188604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-188604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-188604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339",
	                "LowerDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-188604",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-188604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-188604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-188604",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-188604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "110c33ac342ea879d9b041c043ff618722528f8215aa99db8dcdb602b885c1f1",
	            "SandboxKey": "/var/run/docker/netns/110c33ac342e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-188604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bb0788a00cda98d7846d64fc1fe66eb98afdb7fd381de926d036feac84ba741",
	                    "EndpointID": "4568582be57b1c112a801e92d84275a392789df4eae0e2358aa6c2074b79a3b0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fa:ed:3e:7a:4a:20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-188604",
	                        "d326f6fd278c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-188604 logs -n 25: (1.002095704s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-647049 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo docker system info                                                                                                                                 │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cri-dockerd --version                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo containerd config dump                                                                                                                             │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo crio config                                                                                                                                        │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ delete  │ -p bridge-647049                                                                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:25 UTC │
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                          │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:25:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:25:00.392194  317309 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:25:00.392279  317309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:00.392283  317309 out.go:374] Setting ErrFile to fd 2...
	I0110 02:25:00.392287  317309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:00.392477  317309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:25:00.392976  317309 out.go:368] Setting JSON to false
	I0110 02:25:00.394269  317309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4049,"bootTime":1768007851,"procs":457,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:25:00.394317  317309 start.go:143] virtualization: kvm guest
	I0110 02:25:00.396627  317309 out.go:179] * [default-k8s-diff-port-313784] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:25:00.397957  317309 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:25:00.397970  317309 notify.go:221] Checking for updates...
	I0110 02:25:00.400242  317309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:25:00.401485  317309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:00.402704  317309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:25:00.406317  317309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:25:00.407356  317309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:25:00.409141  317309 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:00.409280  317309 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:00.409426  317309 config.go:182] Loaded profile config "old-k8s-version-188604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:25:00.409539  317309 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:25:00.434761  317309 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:25:00.434854  317309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:00.493739  317309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:25:00.484502123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:00.493836  317309 docker.go:319] overlay module found
	I0110 02:25:00.495405  317309 out.go:179] * Using the docker driver based on user configuration
	I0110 02:25:00.496680  317309 start.go:309] selected driver: docker
	I0110 02:25:00.496709  317309 start.go:928] validating driver "docker" against <nil>
	I0110 02:25:00.496736  317309 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:25:00.497259  317309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:00.556702  317309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:25:00.547232481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:00.556869  317309 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:25:00.557101  317309 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:00.558714  317309 out.go:179] * Using Docker driver with root privileges
	I0110 02:25:00.559916  317309 cni.go:84] Creating CNI manager for ""
	I0110 02:25:00.559994  317309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:00.560008  317309 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:25:00.560077  317309 start.go:353] cluster config:
	{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:00.561374  317309 out.go:179] * Starting "default-k8s-diff-port-313784" primary control-plane node in "default-k8s-diff-port-313784" cluster
	I0110 02:25:00.562518  317309 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:25:00.563663  317309 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:25:00.564638  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:00.564665  317309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:25:00.564674  317309 cache.go:65] Caching tarball of preloaded images
	I0110 02:25:00.564733  317309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:25:00.564744  317309 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:25:00.564755  317309 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:25:00.564846  317309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:25:00.564875  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json: {Name:mke69fde4131df0a8ccfd9b1b2b8ce80d8f28b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:00.585110  317309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:25:00.585128  317309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:25:00.585142  317309 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:25:00.585165  317309 start.go:360] acquireMachinesLock for default-k8s-diff-port-313784: {Name:mk0116f4190c69f6825824fe0766dd2c4c328e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:00.585241  317309 start.go:364] duration metric: took 62.883µs to acquireMachinesLock for "default-k8s-diff-port-313784"
	I0110 02:25:00.585269  317309 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:25:00.585318  317309 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:24:56.474826  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:24:58.973310  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:24:57.293595  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:24:59.793590  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:00.318770  298671 node_ready.go:57] node "old-k8s-version-188604" has "Ready":"False" status (will retry)
	I0110 02:25:02.319491  298671 node_ready.go:49] node "old-k8s-version-188604" is "Ready"
	I0110 02:25:02.319522  298671 node_ready.go:38] duration metric: took 13.504308579s for node "old-k8s-version-188604" to be "Ready" ...
	I0110 02:25:02.319539  298671 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:02.319592  298671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:02.333933  298671 api_server.go:72] duration metric: took 14.03338025s to wait for apiserver process to appear ...
	I0110 02:25:02.333964  298671 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:02.333988  298671 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:25:02.340732  298671 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:25:02.341897  298671 api_server.go:141] control plane version: v1.28.0
	I0110 02:25:02.341923  298671 api_server.go:131] duration metric: took 7.952397ms to wait for apiserver health ...
	I0110 02:25:02.341931  298671 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:02.345474  298671 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:02.345511  298671 system_pods.go:61] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.345517  298671 system_pods.go:61] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.345522  298671 system_pods.go:61] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.345528  298671 system_pods.go:61] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.345535  298671 system_pods.go:61] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.345538  298671 system_pods.go:61] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.345541  298671 system_pods.go:61] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.345546  298671 system_pods.go:61] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.345553  298671 system_pods.go:74] duration metric: took 3.616799ms to wait for pod list to return data ...
	I0110 02:25:02.345561  298671 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:02.347940  298671 default_sa.go:45] found service account: "default"
	I0110 02:25:02.347962  298671 default_sa.go:55] duration metric: took 2.394187ms for default service account to be created ...
	I0110 02:25:02.347972  298671 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:02.351378  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.351406  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.351414  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.351422  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.351428  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.351434  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.351439  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.351445  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.351454  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.351482  298671 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:02.552922  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.552955  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.552963  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.552971  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.552975  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.552979  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.552983  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.552994  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.553002  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.856387  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.856413  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Running
	I0110 02:25:02.856419  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.856422  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.856426  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.856430  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.856435  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.856440  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.856445  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Running
	I0110 02:25:02.856454  298671 system_pods.go:126] duration metric: took 508.475351ms to wait for k8s-apps to be running ...
	I0110 02:25:02.856475  298671 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:02.856532  298671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:02.869806  298671 system_svc.go:56] duration metric: took 13.330594ms WaitForService to wait for kubelet
	I0110 02:25:02.869835  298671 kubeadm.go:587] duration metric: took 14.569287464s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:02.869850  298671 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:02.872557  298671 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:02.872579  298671 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:02.872592  298671 node_conditions.go:105] duration metric: took 2.737302ms to run NodePressure ...
	I0110 02:25:02.872603  298671 start.go:242] waiting for startup goroutines ...
	I0110 02:25:02.872610  298671 start.go:247] waiting for cluster config update ...
	I0110 02:25:02.872619  298671 start.go:256] writing updated cluster config ...
	I0110 02:25:02.872932  298671 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:02.876611  298671 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:02.881253  298671 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.885713  298671 pod_ready.go:94] pod "coredns-5dd5756b68-vc68c" is "Ready"
	I0110 02:25:02.885731  298671 pod_ready.go:86] duration metric: took 4.45863ms for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.889064  298671 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.893600  298671 pod_ready.go:94] pod "etcd-old-k8s-version-188604" is "Ready"
	I0110 02:25:02.893623  298671 pod_ready.go:86] duration metric: took 4.538704ms for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.895829  298671 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.899591  298671 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-188604" is "Ready"
	I0110 02:25:02.899611  298671 pod_ready.go:86] duration metric: took 3.76343ms for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.902330  298671 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.397653  298671 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-188604" is "Ready"
	I0110 02:25:03.397681  298671 pod_ready.go:86] duration metric: took 495.334365ms for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.681351  298671 pod_ready.go:83] waiting for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.881037  298671 pod_ready.go:94] pod "kube-proxy-c445q" is "Ready"
	I0110 02:25:03.881067  298671 pod_ready.go:86] duration metric: took 199.676144ms for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.081651  298671 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.481407  298671 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-188604" is "Ready"
	I0110 02:25:04.481434  298671 pod_ready.go:86] duration metric: took 399.75895ms for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.481446  298671 pod_ready.go:40] duration metric: took 1.604804736s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:04.526151  298671 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I0110 02:25:04.588428  298671 out.go:203] 
	W0110 02:25:04.608947  298671 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:25:04.621561  298671 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:25:04.623417  298671 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-188604" cluster and "default" namespace by default
	I0110 02:25:00.586927  317309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:25:00.587148  317309 start.go:159] libmachine.API.Create for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:25:00.587180  317309 client.go:173] LocalClient.Create starting
	I0110 02:25:00.587290  317309 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:25:00.587336  317309 main.go:144] libmachine: Decoding PEM data...
	I0110 02:25:00.587362  317309 main.go:144] libmachine: Parsing certificate...
	I0110 02:25:00.587425  317309 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:25:00.587453  317309 main.go:144] libmachine: Decoding PEM data...
	I0110 02:25:00.587477  317309 main.go:144] libmachine: Parsing certificate...
	I0110 02:25:00.587876  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:25:00.603713  317309 cli_runner.go:211] docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:25:00.603783  317309 network_create.go:284] running [docker network inspect default-k8s-diff-port-313784] to gather additional debugging logs...
	I0110 02:25:00.603799  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784
	W0110 02:25:00.619531  317309 cli_runner.go:211] docker network inspect default-k8s-diff-port-313784 returned with exit code 1
	I0110 02:25:00.619561  317309 network_create.go:287] error running [docker network inspect default-k8s-diff-port-313784]: docker network inspect default-k8s-diff-port-313784: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-313784 not found
	I0110 02:25:00.619592  317309 network_create.go:289] output of [docker network inspect default-k8s-diff-port-313784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-313784 not found
	
	** /stderr **
	I0110 02:25:00.619686  317309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:00.636089  317309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:25:00.636919  317309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:25:00.637882  317309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:25:00.638718  317309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:25:00.639454  317309 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5bb0788a00cd IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:07:16:ea:24:2b} reservation:<nil>}
	I0110 02:25:00.640360  317309 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f6fef0}
	I0110 02:25:00.640387  317309 network_create.go:124] attempt to create docker network default-k8s-diff-port-313784 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0110 02:25:00.640422  317309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 default-k8s-diff-port-313784
	I0110 02:25:00.689478  317309 network_create.go:108] docker network default-k8s-diff-port-313784 192.168.94.0/24 created
	I0110 02:25:00.689512  317309 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-313784" container
	I0110 02:25:00.689566  317309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:25:00.706880  317309 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-313784 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:25:00.724154  317309 oci.go:103] Successfully created a docker volume default-k8s-diff-port-313784
	I0110 02:25:00.724237  317309 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-313784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --entrypoint /usr/bin/test -v default-k8s-diff-port-313784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:25:01.125000  317309 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-313784
	I0110 02:25:01.125098  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:01.125127  317309 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:25:01.125186  317309 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-313784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:25:05.011931  317309 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-313784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.886674134s)
	I0110 02:25:05.011970  317309 kic.go:203] duration metric: took 3.886836272s to extract preloaded images to volume ...
	W0110 02:25:05.012038  317309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:25:05.012067  317309 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:25:05.012103  317309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:25:05.077023  317309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-313784 --name default-k8s-diff-port-313784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --network default-k8s-diff-port-313784 --ip 192.168.94.2 --volume default-k8s-diff-port-313784:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:25:05.356393  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Running}}
	I0110 02:25:05.378292  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	W0110 02:25:00.975628  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:25:03.473338  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	I0110 02:25:05.473260  306368 node_ready.go:49] node "embed-certs-872415" is "Ready"
	I0110 02:25:05.473290  306368 node_ready.go:38] duration metric: took 13.003254802s for node "embed-certs-872415" to be "Ready" ...
	I0110 02:25:05.473307  306368 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:05.473367  306368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:05.487970  306368 api_server.go:72] duration metric: took 13.381037629s to wait for apiserver process to appear ...
	I0110 02:25:05.487997  306368 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:05.488020  306368 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:25:05.492948  306368 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0110 02:25:05.493802  306368 api_server.go:141] control plane version: v1.35.0
	I0110 02:25:05.493822  306368 api_server.go:131] duration metric: took 5.818404ms to wait for apiserver health ...
	I0110 02:25:05.493830  306368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:05.497635  306368 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:05.497667  306368 system_pods.go:61] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.497675  306368 system_pods.go:61] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.497684  306368 system_pods.go:61] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.497693  306368 system_pods.go:61] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.497698  306368 system_pods.go:61] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.497703  306368 system_pods.go:61] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.497714  306368 system_pods.go:61] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.497724  306368 system_pods.go:61] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:05.497734  306368 system_pods.go:74] duration metric: took 3.897808ms to wait for pod list to return data ...
	I0110 02:25:05.497754  306368 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:05.500151  306368 default_sa.go:45] found service account: "default"
	I0110 02:25:05.500168  306368 default_sa.go:55] duration metric: took 2.404258ms for default service account to be created ...
	I0110 02:25:05.500178  306368 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:05.502707  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:05.502735  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.502743  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.502763  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.502772  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.502779  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.502788  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.502801  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.502812  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:05.502851  306368 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:05.732008  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:05.732045  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.732052  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.732058  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.732061  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.732065  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.732068  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.732077  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.732135  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	W0110 02:25:01.793762  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:03.803678  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:06.292883  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	I0110 02:25:06.048077  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:06.048106  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:06.048114  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:06.048120  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:06.048124  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:06.048128  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:06.048131  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:06.048136  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:06.048146  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:06.048157  306368 system_pods.go:126] duration metric: took 547.973131ms to wait for k8s-apps to be running ...
	I0110 02:25:06.048165  306368 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:06.048209  306368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:06.064116  306368 system_svc.go:56] duration metric: took 15.940126ms WaitForService to wait for kubelet
	I0110 02:25:06.064146  306368 kubeadm.go:587] duration metric: took 13.957218617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:06.064170  306368 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:06.066699  306368 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:06.066724  306368 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:06.066737  306368 node_conditions.go:105] duration metric: took 2.561883ms to run NodePressure ...
	I0110 02:25:06.066749  306368 start.go:242] waiting for startup goroutines ...
	I0110 02:25:06.066755  306368 start.go:247] waiting for cluster config update ...
	I0110 02:25:06.066765  306368 start.go:256] writing updated cluster config ...
	I0110 02:25:06.067037  306368 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:06.071779  306368 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:06.075168  306368 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.079050  306368 pod_ready.go:94] pod "coredns-7d764666f9-lfdgm" is "Ready"
	I0110 02:25:06.079072  306368 pod_ready.go:86] duration metric: took 3.879968ms for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.081862  306368 pod_ready.go:83] waiting for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.085524  306368 pod_ready.go:94] pod "etcd-embed-certs-872415" is "Ready"
	I0110 02:25:06.085544  306368 pod_ready.go:86] duration metric: took 3.660409ms for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.176936  306368 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.181365  306368 pod_ready.go:94] pod "kube-apiserver-embed-certs-872415" is "Ready"
	I0110 02:25:06.181390  306368 pod_ready.go:86] duration metric: took 4.431195ms for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.183390  306368 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.476403  306368 pod_ready.go:94] pod "kube-controller-manager-embed-certs-872415" is "Ready"
	I0110 02:25:06.476433  306368 pod_ready.go:86] duration metric: took 293.020723ms for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.676583  306368 pod_ready.go:83] waiting for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.075937  306368 pod_ready.go:94] pod "kube-proxy-47n8d" is "Ready"
	I0110 02:25:07.075962  306368 pod_ready.go:86] duration metric: took 399.357725ms for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.275762  306368 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.676288  306368 pod_ready.go:94] pod "kube-scheduler-embed-certs-872415" is "Ready"
	I0110 02:25:07.676321  306368 pod_ready.go:86] duration metric: took 400.536667ms for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.676336  306368 pod_ready.go:40] duration metric: took 1.604527147s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:07.719956  306368 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:25:07.721934  306368 out.go:179] * Done! kubectl is now configured to use "embed-certs-872415" cluster and "default" namespace by default
	I0110 02:25:05.401124  317309 cli_runner.go:164] Run: docker exec default-k8s-diff-port-313784 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:25:05.461291  317309 oci.go:144] the created container "default-k8s-diff-port-313784" has a running status.
	I0110 02:25:05.461325  317309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa...
	I0110 02:25:05.527181  317309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:25:05.553440  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:25:05.571338  317309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:25:05.571363  317309 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-313784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:25:05.625350  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:25:05.643845  317309 machine.go:94] provisionDockerMachine start ...
	I0110 02:25:05.643986  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:05.668140  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:05.668505  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:05.668526  317309 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:25:05.669701  317309 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53044->127.0.0.1:33105: read: connection reset by peer
	I0110 02:25:08.804919  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:25:08.804953  317309 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-313784"
	I0110 02:25:08.805029  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:08.830536  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:08.830857  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:08.830897  317309 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313784 && echo "default-k8s-diff-port-313784" | sudo tee /etc/hostname
	I0110 02:25:08.970898  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:25:08.970999  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:08.989931  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:08.990192  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:08.990221  317309 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:25:09.119431  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:25:09.119464  317309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:25:09.119509  317309 ubuntu.go:190] setting up certificates
	I0110 02:25:09.119530  317309 provision.go:84] configureAuth start
	I0110 02:25:09.119597  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.137764  317309 provision.go:143] copyHostCerts
	I0110 02:25:09.137826  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:25:09.137839  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:25:09.137920  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:25:09.138022  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:25:09.138036  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:25:09.138076  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:25:09.138167  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:25:09.138178  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:25:09.138216  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:25:09.138278  317309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313784 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-313784 localhost minikube]
	I0110 02:25:09.239098  317309 provision.go:177] copyRemoteCerts
	I0110 02:25:09.239146  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:25:09.239181  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.257663  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.351428  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:25:09.372575  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 02:25:09.389338  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:25:09.406058  317309 provision.go:87] duration metric: took 286.505777ms to configureAuth
	I0110 02:25:09.406083  317309 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:25:09.406234  317309 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:09.406322  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.424190  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:09.424409  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:09.424431  317309 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:25:09.693344  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:25:09.693366  317309 machine.go:97] duration metric: took 4.049489301s to provisionDockerMachine
	I0110 02:25:09.693376  317309 client.go:176] duration metric: took 9.106187333s to LocalClient.Create
	I0110 02:25:09.693391  317309 start.go:167] duration metric: took 9.106245033s to libmachine.API.Create "default-k8s-diff-port-313784"
	I0110 02:25:09.693398  317309 start.go:293] postStartSetup for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:25:09.693406  317309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:25:09.693467  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:25:09.693512  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.711811  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.808337  317309 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:25:09.811710  317309 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:25:09.811743  317309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:25:09.811753  317309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:25:09.811808  317309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:25:09.811948  317309 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:25:09.812067  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:25:09.819116  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:09.838594  317309 start.go:296] duration metric: took 145.186868ms for postStartSetup
	I0110 02:25:09.838928  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.859826  317309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:25:09.860096  317309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:25:09.860157  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.879278  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.969286  317309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:25:09.973716  317309 start.go:128] duration metric: took 9.388381319s to createHost
	I0110 02:25:09.973739  317309 start.go:83] releasing machines lock for "default-k8s-diff-port-313784", held for 9.388488398s
	I0110 02:25:09.973810  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.991687  317309 ssh_runner.go:195] Run: cat /version.json
	I0110 02:25:09.991749  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.991769  317309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:25:09.991838  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:10.011173  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:10.011613  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:10.164201  317309 ssh_runner.go:195] Run: systemctl --version
	I0110 02:25:10.170999  317309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:25:10.203430  317309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:25:10.207806  317309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:25:10.207865  317309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:25:10.232145  317309 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0110 02:25:10.232171  317309 start.go:496] detecting cgroup driver to use...
	I0110 02:25:10.232201  317309 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:25:10.232258  317309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:25:10.248036  317309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:25:10.259670  317309 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:25:10.259715  317309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:25:10.274091  317309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:25:10.290721  317309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:25:10.373029  317309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:25:10.457304  317309 docker.go:234] disabling docker service ...
	I0110 02:25:10.457371  317309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:25:10.476542  317309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:25:10.488631  317309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:25:10.575393  317309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:25:10.659386  317309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:25:10.671564  317309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:25:10.685227  317309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:25:10.685293  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.694896  317309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:25:10.694952  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.703294  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.711254  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.720324  317309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:25:10.728042  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.736233  317309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.750149  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.758257  317309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:25:10.765513  317309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:25:10.772256  317309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:10.856728  317309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:25:11.009116  317309 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:25:11.009174  317309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:25:11.012991  317309 start.go:574] Will wait 60s for crictl version
	I0110 02:25:11.013051  317309 ssh_runner.go:195] Run: which crictl
	I0110 02:25:11.016382  317309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:25:11.040332  317309 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:25:11.040407  317309 ssh_runner.go:195] Run: crio --version
	I0110 02:25:11.068000  317309 ssh_runner.go:195] Run: crio --version
	I0110 02:25:11.096409  317309 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
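	The runtime setup that the provisioning log above performs on the node can be reproduced by hand. The following is a hedged sketch assembled from the Run: lines above (the config path /etc/crio/crio.conf.d/02-crio.conf and the pause image registry.k8s.io/pause:3.10.1 are taken from this log and may differ in other minikube releases); it is illustrative only, not an official minikube procedure:
	
	    # point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pin the pause image and switch CRI-O to the systemd cgroup driver
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    # apply the changes
	    sudo systemctl daemon-reload && sudo systemctl restart crio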
	I0110 02:25:07.793746  303444 node_ready.go:49] node "no-preload-190877" is "Ready"
	I0110 02:25:07.793778  303444 node_ready.go:38] duration metric: took 12.503505454s for node "no-preload-190877" to be "Ready" ...
	I0110 02:25:07.793798  303444 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:07.793839  303444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:07.807490  303444 api_server.go:72] duration metric: took 12.881857064s to wait for apiserver process to appear ...
	I0110 02:25:07.807521  303444 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:07.807542  303444 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:25:07.812838  303444 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:25:07.813832  303444 api_server.go:141] control plane version: v1.35.0
	I0110 02:25:07.813857  303444 api_server.go:131] duration metric: took 6.328629ms to wait for apiserver health ...
	I0110 02:25:07.813865  303444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:07.817658  303444 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:07.817700  303444 system_pods.go:61] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:07.817708  303444 system_pods.go:61] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:07.817717  303444 system_pods.go:61] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:07.817723  303444 system_pods.go:61] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:07.817735  303444 system_pods.go:61] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:07.817740  303444 system_pods.go:61] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:07.817751  303444 system_pods.go:61] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:07.817757  303444 system_pods.go:61] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:07.817766  303444 system_pods.go:74] duration metric: took 3.894183ms to wait for pod list to return data ...
	I0110 02:25:07.817781  303444 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:07.820368  303444 default_sa.go:45] found service account: "default"
	I0110 02:25:07.820384  303444 default_sa.go:55] duration metric: took 2.597024ms for default service account to be created ...
	I0110 02:25:07.820392  303444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:07.823093  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:07.823127  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:07.823135  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:07.823144  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:07.823150  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:07.823160  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:07.823165  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:07.823171  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:07.823178  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:07.823205  303444 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:08.059368  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.059401  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.059407  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.059414  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.059418  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.059427  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.059435  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.059442  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.059452  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:08.434364  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.434398  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.434403  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.434408  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.434412  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.434418  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.434422  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.434432  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.434437  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:08.777704  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.777740  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.777749  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.777758  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.777764  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.777773  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.777780  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.777786  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.777797  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:09.321874  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:09.321923  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Running
	I0110 02:25:09.321932  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:09.321937  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:09.321941  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:09.321948  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:09.321953  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:09.321957  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:09.321960  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Running
	I0110 02:25:09.321967  303444 system_pods.go:126] duration metric: took 1.501570018s to wait for k8s-apps to be running ...
	I0110 02:25:09.321974  303444 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:09.322014  303444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:09.334169  303444 system_svc.go:56] duration metric: took 12.189212ms WaitForService to wait for kubelet
	I0110 02:25:09.334193  303444 kubeadm.go:587] duration metric: took 14.408567523s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:09.334210  303444 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:09.336739  303444 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:09.336760  303444 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:09.336772  303444 node_conditions.go:105] duration metric: took 2.556902ms to run NodePressure ...
	I0110 02:25:09.336780  303444 start.go:242] waiting for startup goroutines ...
	I0110 02:25:09.336787  303444 start.go:247] waiting for cluster config update ...
	I0110 02:25:09.336806  303444 start.go:256] writing updated cluster config ...
	I0110 02:25:09.337056  303444 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:09.340676  303444 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:09.343942  303444 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.347761  303444 pod_ready.go:94] pod "coredns-7d764666f9-xrkw6" is "Ready"
	I0110 02:25:09.347777  303444 pod_ready.go:86] duration metric: took 3.816158ms for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.349481  303444 pod_ready.go:83] waiting for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.352998  303444 pod_ready.go:94] pod "etcd-no-preload-190877" is "Ready"
	I0110 02:25:09.353017  303444 pod_ready.go:86] duration metric: took 3.510866ms for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.354760  303444 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.358139  303444 pod_ready.go:94] pod "kube-apiserver-no-preload-190877" is "Ready"
	I0110 02:25:09.358160  303444 pod_ready.go:86] duration metric: took 3.382821ms for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.360074  303444 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:10.545355  303444 pod_ready.go:94] pod "kube-controller-manager-no-preload-190877" is "Ready"
	I0110 02:25:10.545386  303444 pod_ready.go:86] duration metric: took 1.185293683s for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:10.744868  303444 pod_ready.go:83] waiting for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.145108  303444 pod_ready.go:94] pod "kube-proxy-hrztb" is "Ready"
	I0110 02:25:11.145138  303444 pod_ready.go:86] duration metric: took 400.191312ms for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.345216  303444 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.744837  303444 pod_ready.go:94] pod "kube-scheduler-no-preload-190877" is "Ready"
	I0110 02:25:11.744864  303444 pod_ready.go:86] duration metric: took 399.621321ms for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.744879  303444 pod_ready.go:40] duration metric: took 2.404179584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:11.792298  303444 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:25:11.794626  303444 out.go:179] * Done! kubectl is now configured to use "no-preload-190877" cluster and "default" namespace by default
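	Both runs end with an extra wait of up to 4m0s for the kube-system control-plane pods (selected by the k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy and component=kube-scheduler labels) to report Ready. A roughly equivalent manual check with kubectl, shown only as an illustrative sketch (minikube performs this wait internally rather than by shelling out to kubectl), would be:
	
	    kubectl -n kube-system get pods -l k8s-app=kube-dns
	    kubectl -n kube-system wait pod -l component=kube-scheduler --for=condition=Ready --timeout=4m0s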
	
	
	==> CRI-O <==
	Jan 10 02:25:02 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:02.240284426Z" level=info msg="Starting container: cc0da3d047c230692144faa630e2e3a6dc1448d8246afce8c34b33e53d07a47b" id=d2ba5f67-cb3c-48fc-bf71-07e33afdf1ee name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:02 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:02.242344479Z" level=info msg="Started container" PID=2147 containerID=cc0da3d047c230692144faa630e2e3a6dc1448d8246afce8c34b33e53d07a47b description=kube-system/coredns-5dd5756b68-vc68c/coredns id=d2ba5f67-cb3c-48fc-bf71-07e33afdf1ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f13d2ebb525e34f376208074c48fed10d1aecdb5c1f34f257ce129e527026d3
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.262857517Z" level=info msg="Running pod sandbox: default/busybox/POD" id=dc06b69d-14c9-4f53-b851-36de8f6adb4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.262973226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.268228801Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a44c3dd692314dca6c665cc70e1c716c2be622b75218b669de883f637e144003 UID:70c42a4d-ef36-441a-9154-7c8a868b9828 NetNS:/var/run/netns/e306a1eb-8421-4db5-acf0-53ee12f3d23f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003aaad0}] Aliases:map[]}"
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.268256237Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.286359233Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a44c3dd692314dca6c665cc70e1c716c2be622b75218b669de883f637e144003 UID:70c42a4d-ef36-441a-9154-7c8a868b9828 NetNS:/var/run/netns/e306a1eb-8421-4db5-acf0-53ee12f3d23f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003aaad0}] Aliases:map[]}"
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.286518881Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.287480615Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.288271213Z" level=info msg="Ran pod sandbox a44c3dd692314dca6c665cc70e1c716c2be622b75218b669de883f637e144003 with infra container: default/busybox/POD" id=dc06b69d-14c9-4f53-b851-36de8f6adb4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.289462567Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b31a10c3-484f-40ff-8acc-968903bc5f83 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.289592452Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b31a10c3-484f-40ff-8acc-968903bc5f83 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.289689962Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b31a10c3-484f-40ff-8acc-968903bc5f83 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.290274944Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f90b626f-5d35-47bc-bbbd-0050cb3557f5 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:05 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:05.290575737Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.470305234Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f90b626f-5d35-47bc-bbbd-0050cb3557f5 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.471194238Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ebb22ff0-58a5-4f38-9397-150e7e709cf9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.472773848Z" level=info msg="Creating container: default/busybox/busybox" id=feb6235f-75f7-4715-acb4-9344b1dcaa97 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.472939599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.477152578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.477716611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.501185256Z" level=info msg="Created container fee4c224b319e2f5a91d54cdf66d28300fe8364805bf975a2496835a0920ed28: default/busybox/busybox" id=feb6235f-75f7-4715-acb4-9344b1dcaa97 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.501819962Z" level=info msg="Starting container: fee4c224b319e2f5a91d54cdf66d28300fe8364805bf975a2496835a0920ed28" id=7a136fa1-cbec-4d2d-ae92-9c568ac5d054 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:06 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:06.503509136Z" level=info msg="Started container" PID=2229 containerID=fee4c224b319e2f5a91d54cdf66d28300fe8364805bf975a2496835a0920ed28 description=default/busybox/busybox id=7a136fa1-cbec-4d2d-ae92-9c568ac5d054 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a44c3dd692314dca6c665cc70e1c716c2be622b75218b669de883f637e144003
	Jan 10 02:25:12 old-k8s-version-188604 crio[771]: time="2026-01-10T02:25:12.046122596Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	fee4c224b319e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   a44c3dd692314       busybox                                          default
	cc0da3d047c23       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   7f13d2ebb525e       coredns-5dd5756b68-vc68c                         kube-system
	ef4fb584f5ae0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   7b40ccd225bba       storage-provisioner                              kube-system
	57b59d52dc506       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   61f0fd9a28bb1       kindnet-25dkr                                    kube-system
	e944fc6e66f42       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   f97fb4deeeefe       kube-proxy-c445q                                 kube-system
	6fde901b7f090       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   c736da09bdf9d       kube-apiserver-old-k8s-version-188604            kube-system
	7e64b74841591       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   b095e65ccaa17       etcd-old-k8s-version-188604                      kube-system
	cfbc5f1c2a703       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   c68a297fb5b57       kube-controller-manager-old-k8s-version-188604   kube-system
	cfe9602e25577       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   9c75351eff298       kube-scheduler-old-k8s-version-188604            kube-system
	
	
	==> coredns [cc0da3d047c230692144faa630e2e3a6dc1448d8246afce8c34b33e53d07a47b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42000 - 67 "HINFO IN 6124398123336844772.7001029110413924706. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.089022363s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-188604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-188604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=old-k8s-version-188604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-188604
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:25:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:25:06 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:25:06 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:25:06 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:25:06 +0000   Sat, 10 Jan 2026 02:25:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-188604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                8835f89d-8806-4482-b07d-960e07e8dff0
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-vc68c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-188604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-25dkr                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-188604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-188604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-c445q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-188604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-188604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-188604 event: Registered Node old-k8s-version-188604 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-188604 status is now: NodeReady
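	As a quick sanity check on the Allocated resources block above: summing the per-pod CPU requests (250m for the apiserver, 200m for the controller-manager, and 100m each for etcd, the scheduler, coredns and kindnet) gives 850m, and 850m of the node's 8000m allocatable CPU is roughly 10.6%, matching the reported "850m (10%)"; likewise the 70Mi + 100Mi + 50Mi memory requests add up to the reported 220Mi.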
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [7e64b748415917fd72194a94d1a60fd971b4ca7cb2a985a1cb7d9662d5c5c684] <==
	{"level":"info","ts":"2026-01-10T02:24:31.244272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:31.244308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:31.24434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:31.245214Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:24:31.24574Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-188604 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:24:31.245784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:31.24738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:24:31.247415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:31.247517Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:24:31.247549Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:24:31.248774Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:24:31.249972Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:31.250002Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:31.252785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T02:25:03.396275Z","caller":"traceutil/trace.go:171","msg":"trace[687982862] linearizableReadLoop","detail":"{readStateIndex:417; appliedIndex:416; }","duration":"116.118837ms","start":"2026-01-10T02:25:03.280138Z","end":"2026-01-10T02:25:03.396257Z","steps":["trace[687982862] 'read index received'  (duration: 115.965076ms)","trace[687982862] 'applied index is now lower than readState.Index'  (duration: 153.253µs)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T02:25:03.396361Z","caller":"traceutil/trace.go:171","msg":"trace[1868089156] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"130.663617ms","start":"2026-01-10T02:25:03.265676Z","end":"2026-01-10T02:25:03.39634Z","steps":["trace[1868089156] 'process raft request'  (duration: 130.44262ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T02:25:03.396415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.262562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-188604\" ","response":"range_response_count:1 size:5540"}
	{"level":"info","ts":"2026-01-10T02:25:03.396487Z","caller":"traceutil/trace.go:171","msg":"trace[1409281222] range","detail":"{range_begin:/registry/minions/old-k8s-version-188604; range_end:; response_count:1; response_revision:403; }","duration":"116.363649ms","start":"2026-01-10T02:25:03.280114Z","end":"2026-01-10T02:25:03.396477Z","steps":["trace[1409281222] 'agreement among raft nodes before linearized reading'  (duration: 116.205332ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T02:25:03.679353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.407217ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T02:25:03.679422Z","caller":"traceutil/trace.go:171","msg":"trace[506880716] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:403; }","duration":"233.491408ms","start":"2026-01-10T02:25:03.445916Z","end":"2026-01-10T02:25:03.679408Z","steps":["trace[506880716] 'range keys from in-memory index tree'  (duration: 233.309323ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T02:25:03.679497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.998512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41231"}
	{"level":"info","ts":"2026-01-10T02:25:03.679523Z","caller":"traceutil/trace.go:171","msg":"trace[1435692694] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:403; }","duration":"199.033187ms","start":"2026-01-10T02:25:03.480482Z","end":"2026-01-10T02:25:03.679516Z","steps":["trace[1435692694] 'range keys from in-memory index tree'  (duration: 198.840613ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T02:25:04.449316Z","caller":"traceutil/trace.go:171","msg":"trace[333761672] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"128.224783ms","start":"2026-01-10T02:25:04.321071Z","end":"2026-01-10T02:25:04.449296Z","steps":["trace[333761672] 'process raft request'  (duration: 128.152639ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T02:25:04.449325Z","caller":"traceutil/trace.go:171","msg":"trace[190429944] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"130.819424ms","start":"2026-01-10T02:25:04.318483Z","end":"2026-01-10T02:25:04.449302Z","steps":["trace[190429944] 'process raft request'  (duration: 125.907708ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T02:25:04.952872Z","caller":"traceutil/trace.go:171","msg":"trace[614152246] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"115.574778ms","start":"2026-01-10T02:25:04.837277Z","end":"2026-01-10T02:25:04.952852Z","steps":["trace[614152246] 'process raft request'  (duration: 115.442496ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:25:13 up  1:07,  0 user,  load average: 3.93, 3.50, 2.27
	Linux old-k8s-version-188604 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [57b59d52dc506b61def6ec4b74ce755c368ed30c573e50d4aadf75ad98f3f55f] <==
	I0110 02:24:51.206157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:24:51.206401       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:24:51.206534       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:24:51.206554       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:24:51.206575       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:24:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:24:51.505365       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:24:51.505515       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:24:51.505555       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:24:51.505818       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:24:51.905799       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:24:51.905845       1 metrics.go:72] Registering metrics
	I0110 02:24:51.905949       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:01.414599       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:25:01.414662       1 main.go:301] handling current node
	I0110 02:25:11.406722       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:25:11.406768       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6fde901b7f0905cdc359c4a7fc37f760a6a579c26607ea4d9692fd9f0bbda58d] <==
	I0110 02:24:32.783053       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 02:24:32.783184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:24:32.783199       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 02:24:32.784052       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 02:24:32.784091       1 aggregator.go:166] initial CRD sync complete...
	I0110 02:24:32.784102       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 02:24:32.784108       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:24:32.784114       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:24:32.784609       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 02:24:32.840233       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:24:33.687524       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0110 02:24:33.691183       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:24:33.691201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 02:24:34.085926       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:24:34.120089       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:24:34.196969       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:24:34.202913       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 02:24:34.204161       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 02:24:34.207936       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:24:34.743932       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 02:24:35.648654       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 02:24:35.660742       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:24:35.670721       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0110 02:24:48.457763       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:24:48.538422       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cfbc5f1c2a703df0fc294ba3efa9fb6d1ea75b2eb2c209379d4084ae4c9dbf5d] <==
	I0110 02:24:47.806185       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0110 02:24:47.806324       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-188604"
	I0110 02:24:47.806330       1 event.go:307] "Event occurred" object="old-k8s-version-188604" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-188604 event: Registered Node old-k8s-version-188604 in Controller"
	I0110 02:24:47.806385       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:24:48.126266       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:24:48.185788       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:24:48.185834       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 02:24:48.473095       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c445q"
	I0110 02:24:48.476975       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-25dkr"
	I0110 02:24:48.553031       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0110 02:24:48.631276       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-t4ppw"
	I0110 02:24:48.642382       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vc68c"
	I0110 02:24:48.671656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.915337ms"
	I0110 02:24:48.698327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.361541ms"
	I0110 02:24:48.698859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="217.153µs"
	I0110 02:24:48.843669       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0110 02:24:48.855397       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-t4ppw"
	I0110 02:24:48.865202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.910985ms"
	I0110 02:24:48.885614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.354295ms"
	I0110 02:24:48.885873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="209.387µs"
	I0110 02:25:01.884250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.558µs"
	I0110 02:25:01.900225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.423µs"
	I0110 02:25:02.807895       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0110 02:25:02.837094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.408403ms"
	I0110 02:25:02.837245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.793µs"
	
	
	==> kube-proxy [e944fc6e66f428a7b8b67bd3482f699e663df4c3c2d8d65d6e3b2c67e6d04431] <==
	I0110 02:24:48.944818       1 server_others.go:69] "Using iptables proxy"
	I0110 02:24:48.958616       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0110 02:24:48.985915       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:24:48.988522       1 server_others.go:152] "Using iptables Proxier"
	I0110 02:24:48.988706       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 02:24:48.988742       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 02:24:48.988806       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 02:24:48.989241       1 server.go:846] "Version info" version="v1.28.0"
	I0110 02:24:48.989261       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:48.991275       1 config.go:188] "Starting service config controller"
	I0110 02:24:48.991303       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 02:24:48.991336       1 config.go:97] "Starting endpoint slice config controller"
	I0110 02:24:48.991347       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 02:24:48.991950       1 config.go:315] "Starting node config controller"
	I0110 02:24:48.992048       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 02:24:49.092046       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 02:24:49.092115       1 shared_informer.go:318] Caches are synced for service config
	I0110 02:24:49.093191       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cfe9602e25577815e85c3dcb75909e4e500a4619c33d4e94d97db1bc0cb312e3] <==
	W0110 02:24:32.754634       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0110 02:24:32.754649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0110 02:24:32.754757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0110 02:24:32.754826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0110 02:24:32.754863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0110 02:24:32.754967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 02:24:32.755007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 02:24:32.754970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0110 02:24:32.755108       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0110 02:24:32.755110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0110 02:24:32.755126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0110 02:24:32.755139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0110 02:24:33.611073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0110 02:24:33.611132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0110 02:24:33.634320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0110 02:24:33.634353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0110 02:24:33.652531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0110 02:24:33.652556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0110 02:24:33.661833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0110 02:24:33.661861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0110 02:24:33.917331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0110 02:24:33.917372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0110 02:24:33.951772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0110 02:24:33.951808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0110 02:24:34.352098       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 02:24:47 old-k8s-version-188604 kubelet[1398]: I0110 02:24:47.715251    1398 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.482485    1398 topology_manager.go:215] "Topology Admit Handler" podUID="afdd3e61-ba2d-499d-a5bb-6ec541371d71" podNamespace="kube-system" podName="kube-proxy-c445q"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.493941    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afdd3e61-ba2d-499d-a5bb-6ec541371d71-lib-modules\") pod \"kube-proxy-c445q\" (UID: \"afdd3e61-ba2d-499d-a5bb-6ec541371d71\") " pod="kube-system/kube-proxy-c445q"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.494005    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afdd3e61-ba2d-499d-a5bb-6ec541371d71-xtables-lock\") pod \"kube-proxy-c445q\" (UID: \"afdd3e61-ba2d-499d-a5bb-6ec541371d71\") " pod="kube-system/kube-proxy-c445q"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.494043    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmln\" (UniqueName: \"kubernetes.io/projected/afdd3e61-ba2d-499d-a5bb-6ec541371d71-kube-api-access-ksmln\") pod \"kube-proxy-c445q\" (UID: \"afdd3e61-ba2d-499d-a5bb-6ec541371d71\") " pod="kube-system/kube-proxy-c445q"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.494074    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afdd3e61-ba2d-499d-a5bb-6ec541371d71-kube-proxy\") pod \"kube-proxy-c445q\" (UID: \"afdd3e61-ba2d-499d-a5bb-6ec541371d71\") " pod="kube-system/kube-proxy-c445q"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.499501    1398 topology_manager.go:215] "Topology Admit Handler" podUID="0d70b272-4962-4030-b190-a69657eab2cd" podNamespace="kube-system" podName="kindnet-25dkr"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.595147    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0d70b272-4962-4030-b190-a69657eab2cd-cni-cfg\") pod \"kindnet-25dkr\" (UID: \"0d70b272-4962-4030-b190-a69657eab2cd\") " pod="kube-system/kindnet-25dkr"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.595214    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d70b272-4962-4030-b190-a69657eab2cd-lib-modules\") pod \"kindnet-25dkr\" (UID: \"0d70b272-4962-4030-b190-a69657eab2cd\") " pod="kube-system/kindnet-25dkr"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.595247    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5dgd\" (UniqueName: \"kubernetes.io/projected/0d70b272-4962-4030-b190-a69657eab2cd-kube-api-access-h5dgd\") pod \"kindnet-25dkr\" (UID: \"0d70b272-4962-4030-b190-a69657eab2cd\") " pod="kube-system/kindnet-25dkr"
	Jan 10 02:24:48 old-k8s-version-188604 kubelet[1398]: I0110 02:24:48.595308    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d70b272-4962-4030-b190-a69657eab2cd-xtables-lock\") pod \"kindnet-25dkr\" (UID: \"0d70b272-4962-4030-b190-a69657eab2cd\") " pod="kube-system/kindnet-25dkr"
	Jan 10 02:24:49 old-k8s-version-188604 kubelet[1398]: I0110 02:24:49.789604    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c445q" podStartSLOduration=1.789551958 podCreationTimestamp="2026-01-10 02:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:24:49.78955334 +0000 UTC m=+14.168387239" watchObservedRunningTime="2026-01-10 02:24:49.789551958 +0000 UTC m=+14.168385853"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.858241    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.884342    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-25dkr" podStartSLOduration=11.828276834 podCreationTimestamp="2026-01-10 02:24:48 +0000 UTC" firstStartedPulling="2026-01-10 02:24:48.815175576 +0000 UTC m=+13.194009466" lastFinishedPulling="2026-01-10 02:24:50.871167414 +0000 UTC m=+15.250001303" observedRunningTime="2026-01-10 02:24:51.798673123 +0000 UTC m=+16.177507031" watchObservedRunningTime="2026-01-10 02:25:01.884268671 +0000 UTC m=+26.263102568"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.884573    1398 topology_manager.go:215] "Topology Admit Handler" podUID="c1dc1059-c986-4d7a-80ab-b983545f5602" podNamespace="kube-system" podName="coredns-5dd5756b68-vc68c"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.886206    1398 topology_manager.go:215] "Topology Admit Handler" podUID="ef938075-c2da-49a3-a955-89f2a00bacf7" podNamespace="kube-system" podName="storage-provisioner"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.993635    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1dc1059-c986-4d7a-80ab-b983545f5602-config-volume\") pod \"coredns-5dd5756b68-vc68c\" (UID: \"c1dc1059-c986-4d7a-80ab-b983545f5602\") " pod="kube-system/coredns-5dd5756b68-vc68c"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.993710    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef938075-c2da-49a3-a955-89f2a00bacf7-tmp\") pod \"storage-provisioner\" (UID: \"ef938075-c2da-49a3-a955-89f2a00bacf7\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.993897    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mrv\" (UniqueName: \"kubernetes.io/projected/ef938075-c2da-49a3-a955-89f2a00bacf7-kube-api-access-h6mrv\") pod \"storage-provisioner\" (UID: \"ef938075-c2da-49a3-a955-89f2a00bacf7\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:01 old-k8s-version-188604 kubelet[1398]: I0110 02:25:01.993949    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8xx\" (UniqueName: \"kubernetes.io/projected/c1dc1059-c986-4d7a-80ab-b983545f5602-kube-api-access-cz8xx\") pod \"coredns-5dd5756b68-vc68c\" (UID: \"c1dc1059-c986-4d7a-80ab-b983545f5602\") " pod="kube-system/coredns-5dd5756b68-vc68c"
	Jan 10 02:25:02 old-k8s-version-188604 kubelet[1398]: I0110 02:25:02.820265    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.820212303 podCreationTimestamp="2026-01-10 02:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:02.82017657 +0000 UTC m=+27.199010479" watchObservedRunningTime="2026-01-10 02:25:02.820212303 +0000 UTC m=+27.199046219"
	Jan 10 02:25:02 old-k8s-version-188604 kubelet[1398]: I0110 02:25:02.829436    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vc68c" podStartSLOduration=14.829380411 podCreationTimestamp="2026-01-10 02:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:02.829307768 +0000 UTC m=+27.208141664" watchObservedRunningTime="2026-01-10 02:25:02.829380411 +0000 UTC m=+27.208214307"
	Jan 10 02:25:04 old-k8s-version-188604 kubelet[1398]: I0110 02:25:04.960669    1398 topology_manager.go:215] "Topology Admit Handler" podUID="70c42a4d-ef36-441a-9154-7c8a868b9828" podNamespace="default" podName="busybox"
	Jan 10 02:25:05 old-k8s-version-188604 kubelet[1398]: I0110 02:25:05.111052    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/70c42a4d-ef36-441a-9154-7c8a868b9828-kube-api-access-ghg7q\") pod \"busybox\" (UID: \"70c42a4d-ef36-441a-9154-7c8a868b9828\") " pod="default/busybox"
	Jan 10 02:25:06 old-k8s-version-188604 kubelet[1398]: I0110 02:25:06.829189    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.648418073 podCreationTimestamp="2026-01-10 02:25:04 +0000 UTC" firstStartedPulling="2026-01-10 02:25:05.289916602 +0000 UTC m=+29.668750488" lastFinishedPulling="2026-01-10 02:25:06.470645419 +0000 UTC m=+30.849479304" observedRunningTime="2026-01-10 02:25:06.828855731 +0000 UTC m=+31.207689626" watchObservedRunningTime="2026-01-10 02:25:06.829146889 +0000 UTC m=+31.207980784"
	
	
	==> storage-provisioner [ef4fb584f5ae07cafef427d8f909007161b8df691c31df06183f9896f6089f92] <==
	I0110 02:25:02.250970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:25:02.261201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:25:02.261259       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 02:25:02.268961       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:25:02.269092       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dd8f8c3-352c-4a42-bd82-a8d8489739cb", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-188604_f2fb8395-87ae-4042-ad9d-3855d6141c39 became leader
	I0110 02:25:02.269129       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-188604_f2fb8395-87ae-4042-ad9d-3855d6141c39!
	I0110 02:25:02.369254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-188604_f2fb8395-87ae-4042-ad9d-3855d6141c39!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188604 -n old-k8s-version-188604
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-188604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.138264ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:25:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-872415 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-872415 describe deploy/metrics-server -n kube-system: exit status 1 (56.980064ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-872415 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
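The MK_ADDON_ENABLE_PAUSED exit above is not an addon problem as such: before applying the manifest, minikube checks whether the cluster is paused by shelling out to "sudo runc list -f json", and on this crio node that fails because /run/runc does not exist (see the stderr block). A minimal reproduction sketch, assuming the profile name from this run and using crictl only as an illustrative CRI-level stand-in, not the command minikube itself runs:

	# hedged sketch, not part of the captured test output
	minikube -p embed-certs-872415 ssh -- sudo runc list -f json   # should reproduce the error above: open /run/runc: no such file or directory
	minikube -p embed-certs-872415 ssh -- sudo crictl ps -a        # lists the same containers through the CRI socket instead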
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-872415
helpers_test.go:244: (dbg) docker inspect embed-certs-872415:

-- stdout --
	[
	    {
	        "Id": "5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30",
	        "Created": "2026-01-10T02:24:30.466412403Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307892,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:24:30.517297609Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/hosts",
	        "LogPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30-json.log",
	        "Name": "/embed-certs-872415",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-872415:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-872415",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30",
	                "LowerDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-872415",
	                "Source": "/var/lib/docker/volumes/embed-certs-872415/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-872415",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-872415",
	                "name.minikube.sigs.k8s.io": "embed-certs-872415",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7c059c1dfa8f583d058449b00a0f4d8a2ca894e258f413e73f7174f8996db0d7",
	            "SandboxKey": "/var/run/docker/netns/7c059c1dfa8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-872415": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9ad01e33e846d95581fd92bc0e4f762a980374124d9ad12032ce9b9cc753743a",
	                    "EndpointID": "e78860128cb46f60a86ad92734b492df410e6d70ac4920baae178282663d8c6b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ba:8f:dd:0f:22:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-872415",
	                        "5c3ed37b709e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-872415 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-872415 logs -n 25: (1.035011472s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-647049 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo docker system info                                                                                                                                 │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cri-dockerd --version                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo containerd config dump                                                                                                                             │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo crio config                                                                                                                                        │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ delete  │ -p bridge-647049                                                                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:25 UTC │
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                          │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:25:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
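	(Annotation, not part of the captured log: the header above documents the klog/glog-style layout used by every entry that follows. Purely as an illustrative sketch, with the regexp and variable names invented for the example, a Go snippet could split one such entry into its fields like this:)

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine follows the documented layout: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

	func main() {
		// Sample entry copied from the log body below.
		line := "I0110 02:25:00.392194  317309 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog-formatted line")
			return
		}
		fmt.Printf("severity=%s mmdd=%s time=%s threadid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}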
	I0110 02:25:00.392194  317309 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:25:00.392279  317309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:00.392283  317309 out.go:374] Setting ErrFile to fd 2...
	I0110 02:25:00.392287  317309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:00.392477  317309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:25:00.392976  317309 out.go:368] Setting JSON to false
	I0110 02:25:00.394269  317309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4049,"bootTime":1768007851,"procs":457,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:25:00.394317  317309 start.go:143] virtualization: kvm guest
	I0110 02:25:00.396627  317309 out.go:179] * [default-k8s-diff-port-313784] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:25:00.397957  317309 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:25:00.397970  317309 notify.go:221] Checking for updates...
	I0110 02:25:00.400242  317309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:25:00.401485  317309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:00.402704  317309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:25:00.406317  317309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:25:00.407356  317309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:25:00.409141  317309 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:00.409280  317309 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:00.409426  317309 config.go:182] Loaded profile config "old-k8s-version-188604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:25:00.409539  317309 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:25:00.434761  317309 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:25:00.434854  317309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:00.493739  317309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:25:00.484502123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:00.493836  317309 docker.go:319] overlay module found
	I0110 02:25:00.495405  317309 out.go:179] * Using the docker driver based on user configuration
	I0110 02:25:00.496680  317309 start.go:309] selected driver: docker
	I0110 02:25:00.496709  317309 start.go:928] validating driver "docker" against <nil>
	I0110 02:25:00.496736  317309 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:25:00.497259  317309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:00.556702  317309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:25:00.547232481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:00.556869  317309 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:25:00.557101  317309 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:00.558714  317309 out.go:179] * Using Docker driver with root privileges
	I0110 02:25:00.559916  317309 cni.go:84] Creating CNI manager for ""
	I0110 02:25:00.559994  317309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:00.560008  317309 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:25:00.560077  317309 start.go:353] cluster config:
	{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:00.561374  317309 out.go:179] * Starting "default-k8s-diff-port-313784" primary control-plane node in "default-k8s-diff-port-313784" cluster
	I0110 02:25:00.562518  317309 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:25:00.563663  317309 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:25:00.564638  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:00.564665  317309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:25:00.564674  317309 cache.go:65] Caching tarball of preloaded images
	I0110 02:25:00.564733  317309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:25:00.564744  317309 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:25:00.564755  317309 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:25:00.564846  317309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:25:00.564875  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json: {Name:mke69fde4131df0a8ccfd9b1b2b8ce80d8f28b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:00.585110  317309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:25:00.585128  317309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:25:00.585142  317309 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:25:00.585165  317309 start.go:360] acquireMachinesLock for default-k8s-diff-port-313784: {Name:mk0116f4190c69f6825824fe0766dd2c4c328e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:00.585241  317309 start.go:364] duration metric: took 62.883µs to acquireMachinesLock for "default-k8s-diff-port-313784"
	I0110 02:25:00.585269  317309 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:25:00.585318  317309 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:24:56.474826  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:24:58.973310  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:24:57.293595  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:24:59.793590  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:00.318770  298671 node_ready.go:57] node "old-k8s-version-188604" has "Ready":"False" status (will retry)
	I0110 02:25:02.319491  298671 node_ready.go:49] node "old-k8s-version-188604" is "Ready"
	I0110 02:25:02.319522  298671 node_ready.go:38] duration metric: took 13.504308579s for node "old-k8s-version-188604" to be "Ready" ...
	I0110 02:25:02.319539  298671 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:02.319592  298671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:02.333933  298671 api_server.go:72] duration metric: took 14.03338025s to wait for apiserver process to appear ...
	I0110 02:25:02.333964  298671 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:02.333988  298671 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:25:02.340732  298671 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:25:02.341897  298671 api_server.go:141] control plane version: v1.28.0
	I0110 02:25:02.341923  298671 api_server.go:131] duration metric: took 7.952397ms to wait for apiserver health ...
	I0110 02:25:02.341931  298671 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:02.345474  298671 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:02.345511  298671 system_pods.go:61] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.345517  298671 system_pods.go:61] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.345522  298671 system_pods.go:61] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.345528  298671 system_pods.go:61] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.345535  298671 system_pods.go:61] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.345538  298671 system_pods.go:61] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.345541  298671 system_pods.go:61] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.345546  298671 system_pods.go:61] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.345553  298671 system_pods.go:74] duration metric: took 3.616799ms to wait for pod list to return data ...
	I0110 02:25:02.345561  298671 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:02.347940  298671 default_sa.go:45] found service account: "default"
	I0110 02:25:02.347962  298671 default_sa.go:55] duration metric: took 2.394187ms for default service account to be created ...
	I0110 02:25:02.347972  298671 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:02.351378  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.351406  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.351414  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.351422  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.351428  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.351434  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.351439  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.351445  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.351454  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.351482  298671 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:02.552922  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.552955  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.552963  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.552971  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.552975  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.552979  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.552983  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.552994  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.553002  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.856387  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.856413  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Running
	I0110 02:25:02.856419  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.856422  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.856426  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.856430  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.856435  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.856440  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.856445  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Running
	I0110 02:25:02.856454  298671 system_pods.go:126] duration metric: took 508.475351ms to wait for k8s-apps to be running ...
	I0110 02:25:02.856475  298671 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:02.856532  298671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:02.869806  298671 system_svc.go:56] duration metric: took 13.330594ms WaitForService to wait for kubelet
	I0110 02:25:02.869835  298671 kubeadm.go:587] duration metric: took 14.569287464s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:02.869850  298671 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:02.872557  298671 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:02.872579  298671 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:02.872592  298671 node_conditions.go:105] duration metric: took 2.737302ms to run NodePressure ...
	I0110 02:25:02.872603  298671 start.go:242] waiting for startup goroutines ...
	I0110 02:25:02.872610  298671 start.go:247] waiting for cluster config update ...
	I0110 02:25:02.872619  298671 start.go:256] writing updated cluster config ...
	I0110 02:25:02.872932  298671 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:02.876611  298671 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:02.881253  298671 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.885713  298671 pod_ready.go:94] pod "coredns-5dd5756b68-vc68c" is "Ready"
	I0110 02:25:02.885731  298671 pod_ready.go:86] duration metric: took 4.45863ms for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.889064  298671 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.893600  298671 pod_ready.go:94] pod "etcd-old-k8s-version-188604" is "Ready"
	I0110 02:25:02.893623  298671 pod_ready.go:86] duration metric: took 4.538704ms for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.895829  298671 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.899591  298671 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-188604" is "Ready"
	I0110 02:25:02.899611  298671 pod_ready.go:86] duration metric: took 3.76343ms for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.902330  298671 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.397653  298671 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-188604" is "Ready"
	I0110 02:25:03.397681  298671 pod_ready.go:86] duration metric: took 495.334365ms for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.681351  298671 pod_ready.go:83] waiting for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.881037  298671 pod_ready.go:94] pod "kube-proxy-c445q" is "Ready"
	I0110 02:25:03.881067  298671 pod_ready.go:86] duration metric: took 199.676144ms for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.081651  298671 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.481407  298671 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-188604" is "Ready"
	I0110 02:25:04.481434  298671 pod_ready.go:86] duration metric: took 399.75895ms for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.481446  298671 pod_ready.go:40] duration metric: took 1.604804736s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:04.526151  298671 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I0110 02:25:04.588428  298671 out.go:203] 
	W0110 02:25:04.608947  298671 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:25:04.621561  298671 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:25:04.623417  298671 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-188604" cluster and "default" namespace by default
	I0110 02:25:00.586927  317309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:25:00.587148  317309 start.go:159] libmachine.API.Create for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:25:00.587180  317309 client.go:173] LocalClient.Create starting
	I0110 02:25:00.587290  317309 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:25:00.587336  317309 main.go:144] libmachine: Decoding PEM data...
	I0110 02:25:00.587362  317309 main.go:144] libmachine: Parsing certificate...
	I0110 02:25:00.587425  317309 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:25:00.587453  317309 main.go:144] libmachine: Decoding PEM data...
	I0110 02:25:00.587477  317309 main.go:144] libmachine: Parsing certificate...
	I0110 02:25:00.587876  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:25:00.603713  317309 cli_runner.go:211] docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:25:00.603783  317309 network_create.go:284] running [docker network inspect default-k8s-diff-port-313784] to gather additional debugging logs...
	I0110 02:25:00.603799  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784
	W0110 02:25:00.619531  317309 cli_runner.go:211] docker network inspect default-k8s-diff-port-313784 returned with exit code 1
	I0110 02:25:00.619561  317309 network_create.go:287] error running [docker network inspect default-k8s-diff-port-313784]: docker network inspect default-k8s-diff-port-313784: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-313784 not found
	I0110 02:25:00.619592  317309 network_create.go:289] output of [docker network inspect default-k8s-diff-port-313784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-313784 not found
	
	** /stderr **
	I0110 02:25:00.619686  317309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:00.636089  317309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:25:00.636919  317309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:25:00.637882  317309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:25:00.638718  317309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:25:00.639454  317309 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5bb0788a00cd IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:07:16:ea:24:2b} reservation:<nil>}
	I0110 02:25:00.640360  317309 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f6fef0}
	I0110 02:25:00.640387  317309 network_create.go:124] attempt to create docker network default-k8s-diff-port-313784 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0110 02:25:00.640422  317309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 default-k8s-diff-port-313784
	I0110 02:25:00.689478  317309 network_create.go:108] docker network default-k8s-diff-port-313784 192.168.94.0/24 created
	I0110 02:25:00.689512  317309 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-313784" container
	I0110 02:25:00.689566  317309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:25:00.706880  317309 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-313784 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:25:00.724154  317309 oci.go:103] Successfully created a docker volume default-k8s-diff-port-313784
	I0110 02:25:00.724237  317309 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-313784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --entrypoint /usr/bin/test -v default-k8s-diff-port-313784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:25:01.125000  317309 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-313784
	I0110 02:25:01.125098  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:01.125127  317309 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:25:01.125186  317309 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-313784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:25:05.011931  317309 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-313784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.886674134s)
	I0110 02:25:05.011970  317309 kic.go:203] duration metric: took 3.886836272s to extract preloaded images to volume ...
	W0110 02:25:05.012038  317309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:25:05.012067  317309 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:25:05.012103  317309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:25:05.077023  317309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-313784 --name default-k8s-diff-port-313784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --network default-k8s-diff-port-313784 --ip 192.168.94.2 --volume default-k8s-diff-port-313784:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:25:05.356393  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Running}}
	I0110 02:25:05.378292  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	W0110 02:25:00.975628  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:25:03.473338  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	I0110 02:25:05.473260  306368 node_ready.go:49] node "embed-certs-872415" is "Ready"
	I0110 02:25:05.473290  306368 node_ready.go:38] duration metric: took 13.003254802s for node "embed-certs-872415" to be "Ready" ...
	I0110 02:25:05.473307  306368 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:05.473367  306368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:05.487970  306368 api_server.go:72] duration metric: took 13.381037629s to wait for apiserver process to appear ...
	I0110 02:25:05.487997  306368 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:05.488020  306368 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:25:05.492948  306368 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0110 02:25:05.493802  306368 api_server.go:141] control plane version: v1.35.0
	I0110 02:25:05.493822  306368 api_server.go:131] duration metric: took 5.818404ms to wait for apiserver health ...
	I0110 02:25:05.493830  306368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:05.497635  306368 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:05.497667  306368 system_pods.go:61] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.497675  306368 system_pods.go:61] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.497684  306368 system_pods.go:61] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.497693  306368 system_pods.go:61] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.497698  306368 system_pods.go:61] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.497703  306368 system_pods.go:61] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.497714  306368 system_pods.go:61] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.497724  306368 system_pods.go:61] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:05.497734  306368 system_pods.go:74] duration metric: took 3.897808ms to wait for pod list to return data ...
	I0110 02:25:05.497754  306368 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:05.500151  306368 default_sa.go:45] found service account: "default"
	I0110 02:25:05.500168  306368 default_sa.go:55] duration metric: took 2.404258ms for default service account to be created ...
	I0110 02:25:05.500178  306368 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:05.502707  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:05.502735  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.502743  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.502763  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.502772  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.502779  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.502788  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.502801  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.502812  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:05.502851  306368 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:05.732008  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:05.732045  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.732052  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.732058  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.732061  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.732065  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.732068  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.732077  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.732135  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	W0110 02:25:01.793762  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:03.803678  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:06.292883  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	I0110 02:25:06.048077  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:06.048106  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:06.048114  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:06.048120  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:06.048124  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:06.048128  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:06.048131  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:06.048136  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:06.048146  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:06.048157  306368 system_pods.go:126] duration metric: took 547.973131ms to wait for k8s-apps to be running ...
	I0110 02:25:06.048165  306368 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:06.048209  306368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:06.064116  306368 system_svc.go:56] duration metric: took 15.940126ms WaitForService to wait for kubelet
	I0110 02:25:06.064146  306368 kubeadm.go:587] duration metric: took 13.957218617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:06.064170  306368 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:06.066699  306368 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:06.066724  306368 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:06.066737  306368 node_conditions.go:105] duration metric: took 2.561883ms to run NodePressure ...
	I0110 02:25:06.066749  306368 start.go:242] waiting for startup goroutines ...
	I0110 02:25:06.066755  306368 start.go:247] waiting for cluster config update ...
	I0110 02:25:06.066765  306368 start.go:256] writing updated cluster config ...
	I0110 02:25:06.067037  306368 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:06.071779  306368 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:06.075168  306368 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.079050  306368 pod_ready.go:94] pod "coredns-7d764666f9-lfdgm" is "Ready"
	I0110 02:25:06.079072  306368 pod_ready.go:86] duration metric: took 3.879968ms for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.081862  306368 pod_ready.go:83] waiting for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.085524  306368 pod_ready.go:94] pod "etcd-embed-certs-872415" is "Ready"
	I0110 02:25:06.085544  306368 pod_ready.go:86] duration metric: took 3.660409ms for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.176936  306368 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.181365  306368 pod_ready.go:94] pod "kube-apiserver-embed-certs-872415" is "Ready"
	I0110 02:25:06.181390  306368 pod_ready.go:86] duration metric: took 4.431195ms for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.183390  306368 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.476403  306368 pod_ready.go:94] pod "kube-controller-manager-embed-certs-872415" is "Ready"
	I0110 02:25:06.476433  306368 pod_ready.go:86] duration metric: took 293.020723ms for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.676583  306368 pod_ready.go:83] waiting for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.075937  306368 pod_ready.go:94] pod "kube-proxy-47n8d" is "Ready"
	I0110 02:25:07.075962  306368 pod_ready.go:86] duration metric: took 399.357725ms for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.275762  306368 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.676288  306368 pod_ready.go:94] pod "kube-scheduler-embed-certs-872415" is "Ready"
	I0110 02:25:07.676321  306368 pod_ready.go:86] duration metric: took 400.536667ms for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.676336  306368 pod_ready.go:40] duration metric: took 1.604527147s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:07.719956  306368 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:25:07.721934  306368 out.go:179] * Done! kubectl is now configured to use "embed-certs-872415" cluster and "default" namespace by default
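The pod_ready loop above polls each control-plane pod in "kube-system" until its Ready condition is true (or the pod is gone), inside an overall 4m0s budget. A minimal client-go sketch of that check follows; the kubeconfig path and pod name are illustrative only, not minikube's actual implementation.

// podready.go - sketch of a "Ready or gone" pod check (illustrative only).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone reports true when the pod has condition Ready=True or no longer exists.
func podReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // pod is gone; treated the same as Ready
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // default ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // same 4m0s budget as in the log
	defer cancel()
	for ctx.Err() == nil {
		if ok, err := podReadyOrGone(ctx, cs, "kube-system", "coredns-7d764666f9-lfdgm"); err == nil && ok {
			fmt.Println("pod is Ready (or gone)")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic(ctx.Err())
}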
	I0110 02:25:05.401124  317309 cli_runner.go:164] Run: docker exec default-k8s-diff-port-313784 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:25:05.461291  317309 oci.go:144] the created container "default-k8s-diff-port-313784" has a running status.
	I0110 02:25:05.461325  317309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa...
	I0110 02:25:05.527181  317309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:25:05.553440  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:25:05.571338  317309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:25:05.571363  317309 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-313784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:25:05.625350  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:25:05.643845  317309 machine.go:94] provisionDockerMachine start ...
	I0110 02:25:05.643986  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:05.668140  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:05.668505  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:05.668526  317309 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:25:05.669701  317309 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53044->127.0.0.1:33105: read: connection reset by peer
	I0110 02:25:08.804919  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:25:08.804953  317309 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-313784"
	I0110 02:25:08.805029  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:08.830536  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:08.830857  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:08.830897  317309 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313784 && echo "default-k8s-diff-port-313784" | sudo tee /etc/hostname
	I0110 02:25:08.970898  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:25:08.970999  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:08.989931  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:08.990192  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:08.990221  317309 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:25:09.119431  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: 
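Each provisioning step above (hostname, /etc/hosts patching, and later the sysconfig write) is a shell command run over SSH against port 33105 of the container. A minimal sketch of that pattern with golang.org/x/crypto/ssh, using placeholder host, key path, and command values:

// sshcmd.go - sketch of running one provisioning command over SSH (placeholder values only).
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, address, user, and the remote command are placeholders for illustration.
	key, err := os.ReadFile("/path/to/machines/example/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33105", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// CombinedOutput returns stdout+stderr, like the "SSH cmd err, output" lines above.
	out, err := sess.CombinedOutput(`sudo hostname example-node && echo "example-node" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s, err: %v\n", out, err)
}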
	I0110 02:25:09.119464  317309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:25:09.119509  317309 ubuntu.go:190] setting up certificates
	I0110 02:25:09.119530  317309 provision.go:84] configureAuth start
	I0110 02:25:09.119597  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.137764  317309 provision.go:143] copyHostCerts
	I0110 02:25:09.137826  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:25:09.137839  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:25:09.137920  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:25:09.138022  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:25:09.138036  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:25:09.138076  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:25:09.138167  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:25:09.138178  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:25:09.138216  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:25:09.138278  317309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313784 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-313784 localhost minikube]
	I0110 02:25:09.239098  317309 provision.go:177] copyRemoteCerts
	I0110 02:25:09.239146  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:25:09.239181  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.257663  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.351428  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:25:09.372575  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 02:25:09.389338  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:25:09.406058  317309 provision.go:87] duration metric: took 286.505777ms to configureAuth
	I0110 02:25:09.406083  317309 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:25:09.406234  317309 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:09.406322  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.424190  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:09.424409  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:09.424431  317309 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:25:09.693344  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:25:09.693366  317309 machine.go:97] duration metric: took 4.049489301s to provisionDockerMachine
	I0110 02:25:09.693376  317309 client.go:176] duration metric: took 9.106187333s to LocalClient.Create
	I0110 02:25:09.693391  317309 start.go:167] duration metric: took 9.106245033s to libmachine.API.Create "default-k8s-diff-port-313784"
	I0110 02:25:09.693398  317309 start.go:293] postStartSetup for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:25:09.693406  317309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:25:09.693467  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:25:09.693512  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.711811  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.808337  317309 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:25:09.811710  317309 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:25:09.811743  317309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:25:09.811753  317309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:25:09.811808  317309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:25:09.811948  317309 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:25:09.812067  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:25:09.819116  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:09.838594  317309 start.go:296] duration metric: took 145.186868ms for postStartSetup
	I0110 02:25:09.838928  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.859826  317309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:25:09.860096  317309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:25:09.860157  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.879278  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.969286  317309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:25:09.973716  317309 start.go:128] duration metric: took 9.388381319s to createHost
	I0110 02:25:09.973739  317309 start.go:83] releasing machines lock for "default-k8s-diff-port-313784", held for 9.388488398s
	I0110 02:25:09.973810  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.991687  317309 ssh_runner.go:195] Run: cat /version.json
	I0110 02:25:09.991749  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.991769  317309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:25:09.991838  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:10.011173  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:10.011613  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:10.164201  317309 ssh_runner.go:195] Run: systemctl --version
	I0110 02:25:10.170999  317309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:25:10.203430  317309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:25:10.207806  317309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:25:10.207865  317309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:25:10.232145  317309 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
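Before selecting a CNI, the log shows any pre-existing bridge/podman CNI configs under /etc/cni/net.d being renamed to *.mk_disabled. A rough Go equivalent of that rename step (the glob patterns mirror the find expression in the log; this is not the project's actual code):

// cnidisable.go - sketch of disabling bridge/podman CNI configs by renaming them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", m)
		}
	}
}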
	I0110 02:25:10.232171  317309 start.go:496] detecting cgroup driver to use...
	I0110 02:25:10.232201  317309 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:25:10.232258  317309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:25:10.248036  317309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:25:10.259670  317309 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:25:10.259715  317309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:25:10.274091  317309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:25:10.290721  317309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:25:10.373029  317309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:25:10.457304  317309 docker.go:234] disabling docker service ...
	I0110 02:25:10.457371  317309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:25:10.476542  317309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:25:10.488631  317309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:25:10.575393  317309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:25:10.659386  317309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:25:10.671564  317309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:25:10.685227  317309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:25:10.685293  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.694896  317309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:25:10.694952  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.703294  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.711254  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.720324  317309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:25:10.728042  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.736233  317309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.750149  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.758257  317309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:25:10.765513  317309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:25:10.772256  317309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:10.856728  317309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:25:11.009116  317309 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:25:11.009174  317309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:25:11.012991  317309 start.go:574] Will wait 60s for crictl version
	I0110 02:25:11.013051  317309 ssh_runner.go:195] Run: which crictl
	I0110 02:25:11.016382  317309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:25:11.040332  317309 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:25:11.040407  317309 ssh_runner.go:195] Run: crio --version
	I0110 02:25:11.068000  317309 ssh_runner.go:195] Run: crio --version
	I0110 02:25:11.096409  317309 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
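The CRI-O preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image and cgroup manager) before restarting the service. A condensed Go sketch of those two sed-style edits, assuming the same file path and values as the log:

// crioconf.go - sketch of the pause_image and cgroup_manager rewrites shown above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	text := string(data)
	// Replace whole lines, matching the sed expressions in the log.
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(path, []byte(text), 0o644); err != nil {
		panic(err)
	}
}

As in the log, a change like this still needs the `systemctl daemon-reload` and `systemctl restart crio` steps afterwards to take effect.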
	I0110 02:25:07.793746  303444 node_ready.go:49] node "no-preload-190877" is "Ready"
	I0110 02:25:07.793778  303444 node_ready.go:38] duration metric: took 12.503505454s for node "no-preload-190877" to be "Ready" ...
	I0110 02:25:07.793798  303444 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:07.793839  303444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:07.807490  303444 api_server.go:72] duration metric: took 12.881857064s to wait for apiserver process to appear ...
	I0110 02:25:07.807521  303444 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:07.807542  303444 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:25:07.812838  303444 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:25:07.813832  303444 api_server.go:141] control plane version: v1.35.0
	I0110 02:25:07.813857  303444 api_server.go:131] duration metric: took 6.328629ms to wait for apiserver health ...
	I0110 02:25:07.813865  303444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:07.817658  303444 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:07.817700  303444 system_pods.go:61] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:07.817708  303444 system_pods.go:61] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:07.817717  303444 system_pods.go:61] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:07.817723  303444 system_pods.go:61] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:07.817735  303444 system_pods.go:61] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:07.817740  303444 system_pods.go:61] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:07.817751  303444 system_pods.go:61] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:07.817757  303444 system_pods.go:61] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:07.817766  303444 system_pods.go:74] duration metric: took 3.894183ms to wait for pod list to return data ...
	I0110 02:25:07.817781  303444 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:07.820368  303444 default_sa.go:45] found service account: "default"
	I0110 02:25:07.820384  303444 default_sa.go:55] duration metric: took 2.597024ms for default service account to be created ...
	I0110 02:25:07.820392  303444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:07.823093  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:07.823127  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:07.823135  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:07.823144  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:07.823150  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:07.823160  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:07.823165  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:07.823171  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:07.823178  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:07.823205  303444 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:08.059368  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.059401  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.059407  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.059414  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.059418  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.059427  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.059435  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.059442  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.059452  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:08.434364  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.434398  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.434403  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.434408  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.434412  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.434418  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.434422  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.434432  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.434437  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:08.777704  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.777740  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.777749  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.777758  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.777764  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.777773  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.777780  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.777786  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.777797  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:09.321874  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:09.321923  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Running
	I0110 02:25:09.321932  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:09.321937  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:09.321941  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:09.321948  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:09.321953  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:09.321957  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:09.321960  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Running
	I0110 02:25:09.321967  303444 system_pods.go:126] duration metric: took 1.501570018s to wait for k8s-apps to be running ...
	I0110 02:25:09.321974  303444 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:09.322014  303444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:09.334169  303444 system_svc.go:56] duration metric: took 12.189212ms WaitForService to wait for kubelet
	I0110 02:25:09.334193  303444 kubeadm.go:587] duration metric: took 14.408567523s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:09.334210  303444 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:09.336739  303444 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:09.336760  303444 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:09.336772  303444 node_conditions.go:105] duration metric: took 2.556902ms to run NodePressure ...
	I0110 02:25:09.336780  303444 start.go:242] waiting for startup goroutines ...
	I0110 02:25:09.336787  303444 start.go:247] waiting for cluster config update ...
	I0110 02:25:09.336806  303444 start.go:256] writing updated cluster config ...
	I0110 02:25:09.337056  303444 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:09.340676  303444 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:09.343942  303444 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.347761  303444 pod_ready.go:94] pod "coredns-7d764666f9-xrkw6" is "Ready"
	I0110 02:25:09.347777  303444 pod_ready.go:86] duration metric: took 3.816158ms for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.349481  303444 pod_ready.go:83] waiting for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.352998  303444 pod_ready.go:94] pod "etcd-no-preload-190877" is "Ready"
	I0110 02:25:09.353017  303444 pod_ready.go:86] duration metric: took 3.510866ms for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.354760  303444 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.358139  303444 pod_ready.go:94] pod "kube-apiserver-no-preload-190877" is "Ready"
	I0110 02:25:09.358160  303444 pod_ready.go:86] duration metric: took 3.382821ms for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.360074  303444 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:10.545355  303444 pod_ready.go:94] pod "kube-controller-manager-no-preload-190877" is "Ready"
	I0110 02:25:10.545386  303444 pod_ready.go:86] duration metric: took 1.185293683s for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:10.744868  303444 pod_ready.go:83] waiting for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.145108  303444 pod_ready.go:94] pod "kube-proxy-hrztb" is "Ready"
	I0110 02:25:11.145138  303444 pod_ready.go:86] duration metric: took 400.191312ms for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.345216  303444 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.744837  303444 pod_ready.go:94] pod "kube-scheduler-no-preload-190877" is "Ready"
	I0110 02:25:11.744864  303444 pod_ready.go:86] duration metric: took 399.621321ms for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.744879  303444 pod_ready.go:40] duration metric: took 2.404179584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:11.792298  303444 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:25:11.794626  303444 out.go:179] * Done! kubectl is now configured to use "no-preload-190877" cluster and "default" namespace by default
	I0110 02:25:11.097540  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:11.115277  317309 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 02:25:11.119154  317309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:25:11.129083  317309 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:25:11.129199  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:11.129247  317309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:11.161103  317309 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:11.161119  317309 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:25:11.161160  317309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:11.186578  317309 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:11.186598  317309 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:25:11.186608  317309 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.35.0 crio true true} ...
	I0110 02:25:11.186713  317309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-313784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:25:11.186774  317309 ssh_runner.go:195] Run: crio config
	I0110 02:25:11.230717  317309 cni.go:84] Creating CNI manager for ""
	I0110 02:25:11.230737  317309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:11.230753  317309 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:25:11.230774  317309 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313784 NodeName:default-k8s-diff-port-313784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:25:11.230907  317309 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:25:11.230962  317309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:25:11.238981  317309 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:25:11.239038  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:25:11.246847  317309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:25:11.259095  317309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:25:11.273622  317309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
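The rendered kubeadm configuration above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that parses such a multi-document file and lists each document's kind, assuming the stream has been saved locally as kubeadm.yaml:

// kubeadmcfg.go - sketch that walks a multi-document kubeadm config (local filename is a placeholder).
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the YAML stream
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}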
	I0110 02:25:11.286288  317309 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:25:11.289876  317309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:25:11.299533  317309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:11.381376  317309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:25:11.406213  317309 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784 for IP: 192.168.94.2
	I0110 02:25:11.406233  317309 certs.go:195] generating shared ca certs ...
	I0110 02:25:11.406251  317309 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.406423  317309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:25:11.406477  317309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:25:11.406492  317309 certs.go:257] generating profile certs ...
	I0110 02:25:11.406558  317309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key
	I0110 02:25:11.406585  317309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.crt with IP's: []
	I0110 02:25:11.438540  317309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.crt ...
	I0110 02:25:11.438563  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.crt: {Name:mke2a6975fe8bc62e5113e69fe3c10eb12fbe4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.438727  317309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key ...
	I0110 02:25:11.438739  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key: {Name:mk6a761fe0eff927e997500da7c44716f67ecd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.438818  317309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d
	I0110 02:25:11.438835  317309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0110 02:25:11.490606  317309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d ...
	I0110 02:25:11.490630  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d: {Name:mk3ef9b9973675767cca9b7b4bcade81137f023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.490783  317309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d ...
	I0110 02:25:11.490796  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d: {Name:mk3524da7ac9af860934d643e385dc84a373ae15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.490868  317309 certs.go:382] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt
	I0110 02:25:11.490961  317309 certs.go:386] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key
	I0110 02:25:11.491017  317309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key
	I0110 02:25:11.491032  317309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt with IP's: []
	I0110 02:25:11.658576  317309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt ...
	I0110 02:25:11.658603  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt: {Name:mk3aa863dd03bc6be948618b1e671f9fc4de5e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.658783  317309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key ...
	I0110 02:25:11.658801  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key: {Name:mka9cf14994a790dbbddb5cf2ff304a71b140467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.659069  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:25:11.659114  317309 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:25:11.659127  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:25:11.659154  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:25:11.659190  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:25:11.659228  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:25:11.659276  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:11.659854  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:25:11.677495  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:25:11.694556  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:25:11.711090  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:25:11.727796  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:25:11.746123  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:25:11.763844  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:25:11.781120  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:25:11.800329  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:25:11.822269  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:25:11.840541  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:25:11.858531  317309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:25:11.871499  317309 ssh_runner.go:195] Run: openssl version
	I0110 02:25:11.878826  317309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.886909  317309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:25:11.894523  317309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.898120  317309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.898176  317309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.936329  317309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:25:11.945335  317309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14086.pem /etc/ssl/certs/51391683.0
	I0110 02:25:11.955083  317309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:25:11.965290  317309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:25:11.974103  317309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:25:11.977960  317309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:25:11.978004  317309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:25:12.016720  317309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:25:12.024643  317309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/140862.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:25:12.032212  317309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.041026  317309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:25:12.049812  317309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.053935  317309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.053982  317309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.093213  317309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:25:12.101581  317309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
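The certificate setup above installs each CA under /usr/share/ca-certificates and links /etc/ssl/certs/<subject-hash>.0 to it, using `openssl x509 -hash -noout` to compute the hash (e.g. b5213941 for minikubeCA.pem). A minimal sketch of that hash-and-symlink step; it shells out to the openssl binary and reuses paths from the log:

// certhash.go - sketch of the OpenSSL subject-hash symlink step (requires openssl on PATH).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log; adjust as needed
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}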
	I0110 02:25:12.110131  317309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:25:12.114044  317309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:25:12.114096  317309 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:12.114177  317309 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:25:12.114233  317309 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:25:12.144583  317309 cri.go:96] found id: ""
	I0110 02:25:12.144649  317309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:25:12.154133  317309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:25:12.162231  317309 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:25:12.162284  317309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:25:12.170208  317309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:25:12.170234  317309 kubeadm.go:158] found existing configuration files:
	
	I0110 02:25:12.170277  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0110 02:25:12.178226  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:25:12.178267  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:25:12.185516  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0110 02:25:12.193256  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:25:12.193304  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:25:12.200226  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0110 02:25:12.207540  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:25:12.207578  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:25:12.214480  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0110 02:25:12.221509  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:25:12.221555  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:25:12.228482  317309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:25:12.343966  317309 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 02:25:12.409151  317309 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Jan 10 02:25:05 embed-certs-872415 crio[767]: time="2026-01-10T02:25:05.434456274Z" level=info msg="Starting container: db2ded76ce1b247cdbc8dd35a03bd4e58604478bcaf843411b0b16df43296bce" id=69d6e890-8d42-43c4-bfdf-376e40573892 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:05 embed-certs-872415 crio[767]: time="2026-01-10T02:25:05.436569691Z" level=info msg="Started container" PID=1873 containerID=db2ded76ce1b247cdbc8dd35a03bd4e58604478bcaf843411b0b16df43296bce description=kube-system/coredns-7d764666f9-lfdgm/coredns id=69d6e890-8d42-43c4-bfdf-376e40573892 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66a33919ba3cc8f35f686c19b6987711d596ec3a524d61e864de4cd08c92da5f
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.181799526Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6637d613-4238-43b0-bede-b5e5ba908874 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.181864226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.186605795Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1ae5a1cbc872ce26c7768b6e91065b8b46c897b074547b73244d7eb108955107 UID:bd50d3b2-8ab9-4ef9-9105-c46448470074 NetNS:/var/run/netns/e7905ad7-45c6-4986-a20e-52e20d79fdb0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000994df8}] Aliases:map[]}"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.18663066Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.202128263Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1ae5a1cbc872ce26c7768b6e91065b8b46c897b074547b73244d7eb108955107 UID:bd50d3b2-8ab9-4ef9-9105-c46448470074 NetNS:/var/run/netns/e7905ad7-45c6-4986-a20e-52e20d79fdb0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000994df8}] Aliases:map[]}"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.20226965Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.203134292Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.204307518Z" level=info msg="Ran pod sandbox 1ae5a1cbc872ce26c7768b6e91065b8b46c897b074547b73244d7eb108955107 with infra container: default/busybox/POD" id=6637d613-4238-43b0-bede-b5e5ba908874 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.205590746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=95e80a65-26ba-41b5-bdad-2966577524a2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.205747724Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=95e80a65-26ba-41b5-bdad-2966577524a2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.205826479Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=95e80a65-26ba-41b5-bdad-2966577524a2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.206528121Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d3f9aef6-5695-4697-97c9-2797f5a54434 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.20684016Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.848240633Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d3f9aef6-5695-4697-97c9-2797f5a54434 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.848839008Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f4543a8d-97d5-4313-971a-33d445cc38d7 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.850668605Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd5facd0-ca23-4b4b-8f4a-c886abbd77db name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.853849142Z" level=info msg="Creating container: default/busybox/busybox" id=aa5cbe13-a595-4ce2-8723-b1deb8aba13d name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.854003057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.857375491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.85788085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.895244385Z" level=info msg="Created container 5cda5b439011d789c6ad59aa4e7e71fb025e2f177debc3cbf382b084b4d25461: default/busybox/busybox" id=aa5cbe13-a595-4ce2-8723-b1deb8aba13d name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.895820667Z" level=info msg="Starting container: 5cda5b439011d789c6ad59aa4e7e71fb025e2f177debc3cbf382b084b4d25461" id=3b39448e-e624-4afd-99d4-83bd6af70ac1 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:08 embed-certs-872415 crio[767]: time="2026-01-10T02:25:08.89779Z" level=info msg="Started container" PID=1952 containerID=5cda5b439011d789c6ad59aa4e7e71fb025e2f177debc3cbf382b084b4d25461 description=default/busybox/busybox id=3b39448e-e624-4afd-99d4-83bd6af70ac1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ae5a1cbc872ce26c7768b6e91065b8b46c897b074547b73244d7eb108955107
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	5cda5b439011d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1ae5a1cbc872c       busybox                                      default
	db2ded76ce1b2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   66a33919ba3cc       coredns-7d764666f9-lfdgm                     kube-system
	5d760292c4196       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   3646515b3c667       storage-provisioner                          kube-system
	629618cd75d0a       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   c60973f4a656a       kindnet-jkqz7                                kube-system
	1ffbdf281e0b7       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   a62bb808a0e75       kube-proxy-47n8d                             kube-system
	98c7663e5bf7f       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      34 seconds ago      Running             kube-scheduler            0                   fd27446bc173f       kube-scheduler-embed-certs-872415            kube-system
	94d3c793deadb       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      34 seconds ago      Running             kube-controller-manager   0                   6e1a512632377       kube-controller-manager-embed-certs-872415   kube-system
	bdca50a9494d0       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   efdb0fda15982       etcd-embed-certs-872415                      kube-system
	aabc0a160e54a       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      34 seconds ago      Running             kube-apiserver            0                   84ca11fb5e450       kube-apiserver-embed-certs-872415            kube-system
	
	
	==> coredns [db2ded76ce1b247cdbc8dd35a03bd4e58604478bcaf843411b0b16df43296bce] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59936 - 59484 "HINFO IN 8751111709750626124.4026329975428115869. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.457670158s
	
	
	==> describe nodes <==
	Name:               embed-certs-872415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-872415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=embed-certs-872415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-872415
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:25:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:25:05 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:25:05 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:25:05 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:25:05 +0000   Sat, 10 Jan 2026 02:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-872415
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                2240aa56-47aa-4229-8a1a-8150a18d3a1e
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-lfdgm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-872415                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-jkqz7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-872415             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-872415    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-47n8d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-872415             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node embed-certs-872415 event: Registered Node embed-certs-872415 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [bdca50a9494d03d8646da1cb6b0e926f7402f367a72f0d2141d6c3a41cb35502] <==
	{"level":"info","ts":"2026-01-10T02:24:43.008697Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:24:43.100361Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:24:43.100464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:24:43.100545Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2026-01-10T02:24:43.100571Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:43.100593Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:43.101164Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:43.101250Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:43.101307Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:43.101323Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:43.102289Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-872415 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:24:43.102335Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:43.102443Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:43.102572Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:43.102974Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:43.103051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:43.103196Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:43.103320Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:43.103366Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:43.103410Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:24:43.103520Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:24:43.103820Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:43.103852Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:43.107457Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-10T02:24:43.107810Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:25:17 up  1:07,  0 user,  load average: 3.93, 3.50, 2.27
	Linux embed-certs-872415 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [629618cd75d0a0993af51fe06f949e918edf5eb797625c1724976cb6d462811e] <==
	I0110 02:24:54.380459       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:24:54.380852       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0110 02:24:54.382465       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:24:54.382502       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:24:54.382526       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:24:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:24:54.586570       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:24:54.586601       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:24:54.586612       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:24:54.586709       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:24:54.947555       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:24:54.947711       1 metrics.go:72] Registering metrics
	I0110 02:24:54.947880       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:04.587071       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:25:04.587237       1 main.go:301] handling current node
	I0110 02:25:14.590972       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:25:14.591014       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aabc0a160e54a9b6f0ac71095530367ccbd5876cdc42819257e37fb5177fc5ca] <==
	I0110 02:24:44.277624       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 02:24:44.277662       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	E0110 02:24:44.278400       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0110 02:24:44.279101       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:24:44.283696       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:24:44.292317       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:44.482007       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:24:45.187947       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:24:45.191624       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:24:45.191651       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:24:45.703470       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:24:45.763914       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:24:45.886146       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:24:45.893130       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0110 02:24:45.894472       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:24:45.899772       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:24:46.235661       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:24:47.060387       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:24:47.072313       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:24:47.081729       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:24:51.887127       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:24:51.938992       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:51.943571       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:52.239027       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0110 02:25:15.975425       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:56132: use of closed network connection
	
	
	==> kube-controller-manager [94d3c793deadb392de7d1ad2e4fddefe7e1604ee745ab879df0c16e3e9a487d2] <==
	I0110 02:24:51.055396       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.056203       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.056385       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.056546       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.057008       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.057105       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:24:51.057183       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.057197       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-872415"
	I0110 02:24:51.057260       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:24:51.057410       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.057417       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.057445       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.059063       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.059599       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.055714       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.062335       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.055724       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.062826       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.059433       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.075206       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-872415" podCIDRs=["10.244.0.0/24"]
	I0110 02:24:51.150828       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.157021       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:51.157041       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:24:51.157048       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:25:06.059383       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [1ffbdf281e0b7628610e38d006d8b374c4feb0bd2b5db4f43417a03308a7a9f9] <==
	I0110 02:24:52.399786       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:24:52.496343       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:52.597352       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:52.597393       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0110 02:24:52.597512       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:24:52.616092       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:24:52.616150       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:24:52.622051       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:24:52.622447       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:24:52.622471       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:52.624193       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:24:52.624245       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:24:52.624254       1 config.go:200] "Starting service config controller"
	I0110 02:24:52.624274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:24:52.624343       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:24:52.624380       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:24:52.624704       1 config.go:309] "Starting node config controller"
	I0110 02:24:52.624742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:24:52.624751       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:24:52.725213       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:24:52.725249       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:24:52.725259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [98c7663e5bf7fd923f792e5a6978b371238efc134263fe2a65b0c47a1aa01858] <==
	E0110 02:24:44.253599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:24:44.253720       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:24:44.253805       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:24:44.253958       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:24:44.254782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:24:44.254808       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:24:44.254866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:24:44.254970       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:24:44.254989       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:24:44.255049       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:24:44.255095       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:24:44.255061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:24:44.255113       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:24:44.255154       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:24:44.255200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:24:45.081869       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:24:45.104746       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:24:45.240166       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:24:45.243647       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:24:45.246957       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:24:45.267527       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:24:45.371102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:24:45.484820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:24:45.533357       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I0110 02:24:47.445798       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:24:51 embed-certs-872415 kubelet[1294]: I0110 02:24:51.979133    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk6r4\" (UniqueName: \"kubernetes.io/projected/46c935a2-5370-4d15-9eb0-0b829972680c-kube-api-access-dk6r4\") pod \"kube-proxy-47n8d\" (UID: \"46c935a2-5370-4d15-9eb0-0b829972680c\") " pod="kube-system/kube-proxy-47n8d"
	Jan 10 02:24:51 embed-certs-872415 kubelet[1294]: I0110 02:24:51.979316    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a595e658-b418-4cb2-b205-4a7dccacc5a6-lib-modules\") pod \"kindnet-jkqz7\" (UID: \"a595e658-b418-4cb2-b205-4a7dccacc5a6\") " pod="kube-system/kindnet-jkqz7"
	Jan 10 02:24:51 embed-certs-872415 kubelet[1294]: I0110 02:24:51.979355    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a595e658-b418-4cb2-b205-4a7dccacc5a6-xtables-lock\") pod \"kindnet-jkqz7\" (UID: \"a595e658-b418-4cb2-b205-4a7dccacc5a6\") " pod="kube-system/kindnet-jkqz7"
	Jan 10 02:24:51 embed-certs-872415 kubelet[1294]: I0110 02:24:51.979378    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlx2\" (UniqueName: \"kubernetes.io/projected/a595e658-b418-4cb2-b205-4a7dccacc5a6-kube-api-access-mvlx2\") pod \"kindnet-jkqz7\" (UID: \"a595e658-b418-4cb2-b205-4a7dccacc5a6\") " pod="kube-system/kindnet-jkqz7"
	Jan 10 02:24:54 embed-certs-872415 kubelet[1294]: E0110 02:24:54.581880    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-872415" containerName="kube-apiserver"
	Jan 10 02:24:54 embed-certs-872415 kubelet[1294]: I0110 02:24:54.594565    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-47n8d" podStartSLOduration=3.594544766 podStartE2EDuration="3.594544766s" podCreationTimestamp="2026-01-10 02:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:24:53.026687359 +0000 UTC m=+6.188552401" watchObservedRunningTime="2026-01-10 02:24:54.594544766 +0000 UTC m=+7.756409809"
	Jan 10 02:24:56 embed-certs-872415 kubelet[1294]: I0110 02:24:56.985120    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-jkqz7" podStartSLOduration=4.136029386 podStartE2EDuration="5.985101323s" podCreationTimestamp="2026-01-10 02:24:51 +0000 UTC" firstStartedPulling="2026-01-10 02:24:52.22438298 +0000 UTC m=+5.386248014" lastFinishedPulling="2026-01-10 02:24:54.073454918 +0000 UTC m=+7.235319951" observedRunningTime="2026-01-10 02:24:55.031037182 +0000 UTC m=+8.192902225" watchObservedRunningTime="2026-01-10 02:24:56.985101323 +0000 UTC m=+10.146966365"
	Jan 10 02:24:57 embed-certs-872415 kubelet[1294]: E0110 02:24:57.332869    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-872415" containerName="kube-scheduler"
	Jan 10 02:24:58 embed-certs-872415 kubelet[1294]: E0110 02:24:58.238274    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-872415" containerName="etcd"
	Jan 10 02:24:58 embed-certs-872415 kubelet[1294]: E0110 02:24:58.565350    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-872415" containerName="kube-controller-manager"
	Jan 10 02:24:59 embed-certs-872415 kubelet[1294]: E0110 02:24:59.017151    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-872415" containerName="etcd"
	Jan 10 02:25:04 embed-certs-872415 kubelet[1294]: E0110 02:25:04.587618    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-872415" containerName="kube-apiserver"
	Jan 10 02:25:05 embed-certs-872415 kubelet[1294]: I0110 02:25:05.036071    1294 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:25:05 embed-certs-872415 kubelet[1294]: I0110 02:25:05.172393    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3924ddbe-72a5-44c0-8f2c-c2af0f54fc11-tmp\") pod \"storage-provisioner\" (UID: \"3924ddbe-72a5-44c0-8f2c-c2af0f54fc11\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:05 embed-certs-872415 kubelet[1294]: I0110 02:25:05.172448    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrtkt\" (UniqueName: \"kubernetes.io/projected/fcf82466-c853-422e-a9f0-cc536a0b4c8f-kube-api-access-rrtkt\") pod \"coredns-7d764666f9-lfdgm\" (UID: \"fcf82466-c853-422e-a9f0-cc536a0b4c8f\") " pod="kube-system/coredns-7d764666f9-lfdgm"
	Jan 10 02:25:05 embed-certs-872415 kubelet[1294]: I0110 02:25:05.172558    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg2f4\" (UniqueName: \"kubernetes.io/projected/3924ddbe-72a5-44c0-8f2c-c2af0f54fc11-kube-api-access-qg2f4\") pod \"storage-provisioner\" (UID: \"3924ddbe-72a5-44c0-8f2c-c2af0f54fc11\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:05 embed-certs-872415 kubelet[1294]: I0110 02:25:05.172617    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcf82466-c853-422e-a9f0-cc536a0b4c8f-config-volume\") pod \"coredns-7d764666f9-lfdgm\" (UID: \"fcf82466-c853-422e-a9f0-cc536a0b4c8f\") " pod="kube-system/coredns-7d764666f9-lfdgm"
	Jan 10 02:25:06 embed-certs-872415 kubelet[1294]: E0110 02:25:06.033158    1294 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lfdgm" containerName="coredns"
	Jan 10 02:25:06 embed-certs-872415 kubelet[1294]: I0110 02:25:06.044755    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lfdgm" podStartSLOduration=14.044732051 podStartE2EDuration="14.044732051s" podCreationTimestamp="2026-01-10 02:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:06.04458749 +0000 UTC m=+19.206452550" watchObservedRunningTime="2026-01-10 02:25:06.044732051 +0000 UTC m=+19.206597094"
	Jan 10 02:25:06 embed-certs-872415 kubelet[1294]: I0110 02:25:06.055620    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.055600363 podStartE2EDuration="14.055600363s" podCreationTimestamp="2026-01-10 02:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:06.055146801 +0000 UTC m=+19.217011842" watchObservedRunningTime="2026-01-10 02:25:06.055600363 +0000 UTC m=+19.217465403"
	Jan 10 02:25:07 embed-certs-872415 kubelet[1294]: E0110 02:25:07.037004    1294 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lfdgm" containerName="coredns"
	Jan 10 02:25:07 embed-certs-872415 kubelet[1294]: E0110 02:25:07.337695    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-872415" containerName="kube-scheduler"
	Jan 10 02:25:07 embed-certs-872415 kubelet[1294]: I0110 02:25:07.989778    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6fh\" (UniqueName: \"kubernetes.io/projected/bd50d3b2-8ab9-4ef9-9105-c46448470074-kube-api-access-8d6fh\") pod \"busybox\" (UID: \"bd50d3b2-8ab9-4ef9-9105-c46448470074\") " pod="default/busybox"
	Jan 10 02:25:08 embed-certs-872415 kubelet[1294]: E0110 02:25:08.039495    1294 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lfdgm" containerName="coredns"
	Jan 10 02:25:09 embed-certs-872415 kubelet[1294]: I0110 02:25:09.052725    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.409126529 podStartE2EDuration="2.052707732s" podCreationTimestamp="2026-01-10 02:25:07 +0000 UTC" firstStartedPulling="2026-01-10 02:25:08.206210089 +0000 UTC m=+21.368075109" lastFinishedPulling="2026-01-10 02:25:08.849791278 +0000 UTC m=+22.011656312" observedRunningTime="2026-01-10 02:25:09.052469285 +0000 UTC m=+22.214334349" watchObservedRunningTime="2026-01-10 02:25:09.052707732 +0000 UTC m=+22.214572773"
	
	
	==> storage-provisioner [5d760292c4196be0a0cbb87e02982e3c7a8e88804c88aba08e72ec8c742443de] <==
	I0110 02:25:05.436205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:25:05.446600       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:25:05.446660       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:25:05.449271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:05.455396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:25:05.455601       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:25:05.455792       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-872415_b9ead1c4-118e-40d0-9c2b-b627099312f2!
	I0110 02:25:05.455726       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d58c82c7-cbb5-4fa4-bce6-ee7de4cc80bf", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-872415_b9ead1c4-118e-40d0-9c2b-b627099312f2 became leader
	W0110 02:25:05.460085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:05.463636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:25:05.556004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-872415_b9ead1c4-118e-40d0-9c2b-b627099312f2!
	W0110 02:25:07.466575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:07.470433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:09.473577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:09.477660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:11.481400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:11.485150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:13.488472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:13.493559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:15.498207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:15.504464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:17.507933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:17.512925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872415 -n embed-certs-872415
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-872415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.15s)
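Note on the repeated warnings in the storage-provisioner log above: they track the leader-election lease on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is refreshed every couple of seconds, and each refresh triggers "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice". As a hedged illustration only (this is not the provisioner's code), a client-go consumer that wants to follow that advice would query EndpointSlices instead of the legacy Endpoints resource:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List discovery.k8s.io/v1 EndpointSlices instead of the deprecated v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, "->", len(s.Endpoints), "endpoints")
	}
}

The warnings are benign here; they are unrelated to the EnableAddonWhileActive failure itself.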

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (235.044656ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:25:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-190877 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-190877 describe deploy/metrics-server -n kube-system: exit status 1 (59.728436ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-190877 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
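The stderr above pins the exit status 11 on minikube's paused-state check rather than on the addon itself: before enabling an addon, minikube verifies the cluster is not paused, and on this profile that check shells out to `sudo runc list -f json`, which fails because /run/runc does not exist on the crio node (MK_ADDON_ENABLE_PAUSED). A minimal sketch of that failing probe follows — it is an illustration of the command seen in the log, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Reproduce the paused-state probe from the stderr above: list runc
	// containers as JSON. On a node whose runtime state lives elsewhere
	// (crio here, with no /run/runc), the command exits non-zero with
	// "open /run/runc: no such file or directory".
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("output: %s\n", out)
	if err != nil {
		fmt.Printf("probe failed: %v\n", err) // e.g. "exit status 1"
	}
}

Because the probe fails before any addon manifests are applied, the metrics-server deployment is never created, which is why the subsequent `kubectl describe deploy/metrics-server` returns NotFound.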
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-190877
helpers_test.go:244: (dbg) docker inspect no-preload-190877:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022",
	        "Created": "2026-01-10T02:24:22.284558877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:24:22.321810685Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/hostname",
	        "HostsPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/hosts",
	        "LogPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022-json.log",
	        "Name": "/no-preload-190877",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-190877:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-190877",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022",
	                "LowerDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-190877",
	                "Source": "/var/lib/docker/volumes/no-preload-190877/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-190877",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-190877",
	                "name.minikube.sigs.k8s.io": "no-preload-190877",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "931b9cd5a7ead2fef425e10cb86b5002a8ae7514fc9fb62c2ede6741f94d0ffa",
	            "SandboxKey": "/var/run/docker/netns/931b9cd5a7ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-190877": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6a77220e3dd22bdd3789c842dfc9aca093d12a84cb3c74b1a1cb51e3e4df363",
	                    "EndpointID": "70c2784c13be646c0edb5ac40c7ce5e7aa85ae1f8838d3f407f79a0e0cd6ed77",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ea:a8:9e:d6:ee:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-190877",
	                        "311ec206bd98"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-190877 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-647049 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo docker system info                                                                                                                                 │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cri-dockerd --version                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo containerd config dump                                                                                                                             │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo crio config                                                                                                                                        │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ delete  │ -p bridge-647049                                                                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:25 UTC │
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                          │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                             │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:25:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:25:00.392194  317309 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:25:00.392279  317309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:00.392283  317309 out.go:374] Setting ErrFile to fd 2...
	I0110 02:25:00.392287  317309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:00.392477  317309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:25:00.392976  317309 out.go:368] Setting JSON to false
	I0110 02:25:00.394269  317309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4049,"bootTime":1768007851,"procs":457,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:25:00.394317  317309 start.go:143] virtualization: kvm guest
	I0110 02:25:00.396627  317309 out.go:179] * [default-k8s-diff-port-313784] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:25:00.397957  317309 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:25:00.397970  317309 notify.go:221] Checking for updates...
	I0110 02:25:00.400242  317309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:25:00.401485  317309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:00.402704  317309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:25:00.406317  317309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:25:00.407356  317309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:25:00.409141  317309 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:00.409280  317309 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:00.409426  317309 config.go:182] Loaded profile config "old-k8s-version-188604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:25:00.409539  317309 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:25:00.434761  317309 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:25:00.434854  317309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:00.493739  317309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:25:00.484502123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:00.493836  317309 docker.go:319] overlay module found
	I0110 02:25:00.495405  317309 out.go:179] * Using the docker driver based on user configuration
	I0110 02:25:00.496680  317309 start.go:309] selected driver: docker
	I0110 02:25:00.496709  317309 start.go:928] validating driver "docker" against <nil>
	I0110 02:25:00.496736  317309 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:25:00.497259  317309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:00.556702  317309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:25:00.547232481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:00.556869  317309 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:25:00.557101  317309 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:00.558714  317309 out.go:179] * Using Docker driver with root privileges
	I0110 02:25:00.559916  317309 cni.go:84] Creating CNI manager for ""
	I0110 02:25:00.559994  317309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:00.560008  317309 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:25:00.560077  317309 start.go:353] cluster config:
	{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:00.561374  317309 out.go:179] * Starting "default-k8s-diff-port-313784" primary control-plane node in "default-k8s-diff-port-313784" cluster
	I0110 02:25:00.562518  317309 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:25:00.563663  317309 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:25:00.564638  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:00.564665  317309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:25:00.564674  317309 cache.go:65] Caching tarball of preloaded images
	I0110 02:25:00.564733  317309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:25:00.564744  317309 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:25:00.564755  317309 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:25:00.564846  317309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:25:00.564875  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json: {Name:mke69fde4131df0a8ccfd9b1b2b8ce80d8f28b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:00.585110  317309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:25:00.585128  317309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:25:00.585142  317309 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:25:00.585165  317309 start.go:360] acquireMachinesLock for default-k8s-diff-port-313784: {Name:mk0116f4190c69f6825824fe0766dd2c4c328e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:00.585241  317309 start.go:364] duration metric: took 62.883µs to acquireMachinesLock for "default-k8s-diff-port-313784"
	I0110 02:25:00.585269  317309 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:25:00.585318  317309 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:24:56.474826  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:24:58.973310  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:24:57.293595  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:24:59.793590  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:00.318770  298671 node_ready.go:57] node "old-k8s-version-188604" has "Ready":"False" status (will retry)
	I0110 02:25:02.319491  298671 node_ready.go:49] node "old-k8s-version-188604" is "Ready"
	I0110 02:25:02.319522  298671 node_ready.go:38] duration metric: took 13.504308579s for node "old-k8s-version-188604" to be "Ready" ...
	I0110 02:25:02.319539  298671 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:02.319592  298671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:02.333933  298671 api_server.go:72] duration metric: took 14.03338025s to wait for apiserver process to appear ...
	I0110 02:25:02.333964  298671 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:02.333988  298671 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:25:02.340732  298671 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:25:02.341897  298671 api_server.go:141] control plane version: v1.28.0
	I0110 02:25:02.341923  298671 api_server.go:131] duration metric: took 7.952397ms to wait for apiserver health ...
	I0110 02:25:02.341931  298671 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:02.345474  298671 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:02.345511  298671 system_pods.go:61] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.345517  298671 system_pods.go:61] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.345522  298671 system_pods.go:61] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.345528  298671 system_pods.go:61] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.345535  298671 system_pods.go:61] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.345538  298671 system_pods.go:61] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.345541  298671 system_pods.go:61] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.345546  298671 system_pods.go:61] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.345553  298671 system_pods.go:74] duration metric: took 3.616799ms to wait for pod list to return data ...
	I0110 02:25:02.345561  298671 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:02.347940  298671 default_sa.go:45] found service account: "default"
	I0110 02:25:02.347962  298671 default_sa.go:55] duration metric: took 2.394187ms for default service account to be created ...
	I0110 02:25:02.347972  298671 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:02.351378  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.351406  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.351414  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.351422  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.351428  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.351434  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.351439  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.351445  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.351454  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.351482  298671 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:02.552922  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.552955  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:02.552963  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.552971  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.552975  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.552979  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.552983  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.552994  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.553002  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:02.856387  298671 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:02.856413  298671 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Running
	I0110 02:25:02.856419  298671 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running
	I0110 02:25:02.856422  298671 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running
	I0110 02:25:02.856426  298671 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running
	I0110 02:25:02.856430  298671 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running
	I0110 02:25:02.856435  298671 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running
	I0110 02:25:02.856440  298671 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running
	I0110 02:25:02.856445  298671 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Running
	I0110 02:25:02.856454  298671 system_pods.go:126] duration metric: took 508.475351ms to wait for k8s-apps to be running ...
	I0110 02:25:02.856475  298671 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:02.856532  298671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:02.869806  298671 system_svc.go:56] duration metric: took 13.330594ms WaitForService to wait for kubelet
	I0110 02:25:02.869835  298671 kubeadm.go:587] duration metric: took 14.569287464s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:02.869850  298671 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:02.872557  298671 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:02.872579  298671 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:02.872592  298671 node_conditions.go:105] duration metric: took 2.737302ms to run NodePressure ...
	I0110 02:25:02.872603  298671 start.go:242] waiting for startup goroutines ...
	I0110 02:25:02.872610  298671 start.go:247] waiting for cluster config update ...
	I0110 02:25:02.872619  298671 start.go:256] writing updated cluster config ...
	I0110 02:25:02.872932  298671 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:02.876611  298671 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:02.881253  298671 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.885713  298671 pod_ready.go:94] pod "coredns-5dd5756b68-vc68c" is "Ready"
	I0110 02:25:02.885731  298671 pod_ready.go:86] duration metric: took 4.45863ms for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.889064  298671 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.893600  298671 pod_ready.go:94] pod "etcd-old-k8s-version-188604" is "Ready"
	I0110 02:25:02.893623  298671 pod_ready.go:86] duration metric: took 4.538704ms for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.895829  298671 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.899591  298671 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-188604" is "Ready"
	I0110 02:25:02.899611  298671 pod_ready.go:86] duration metric: took 3.76343ms for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:02.902330  298671 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.397653  298671 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-188604" is "Ready"
	I0110 02:25:03.397681  298671 pod_ready.go:86] duration metric: took 495.334365ms for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.681351  298671 pod_ready.go:83] waiting for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:03.881037  298671 pod_ready.go:94] pod "kube-proxy-c445q" is "Ready"
	I0110 02:25:03.881067  298671 pod_ready.go:86] duration metric: took 199.676144ms for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.081651  298671 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.481407  298671 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-188604" is "Ready"
	I0110 02:25:04.481434  298671 pod_ready.go:86] duration metric: took 399.75895ms for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:04.481446  298671 pod_ready.go:40] duration metric: took 1.604804736s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:04.526151  298671 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I0110 02:25:04.588428  298671 out.go:203] 
	W0110 02:25:04.608947  298671 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:25:04.621561  298671 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:25:04.623417  298671 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-188604" cluster and "default" namespace by default
	I0110 02:25:00.586927  317309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:25:00.587148  317309 start.go:159] libmachine.API.Create for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:25:00.587180  317309 client.go:173] LocalClient.Create starting
	I0110 02:25:00.587290  317309 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:25:00.587336  317309 main.go:144] libmachine: Decoding PEM data...
	I0110 02:25:00.587362  317309 main.go:144] libmachine: Parsing certificate...
	I0110 02:25:00.587425  317309 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:25:00.587453  317309 main.go:144] libmachine: Decoding PEM data...
	I0110 02:25:00.587477  317309 main.go:144] libmachine: Parsing certificate...
	I0110 02:25:00.587876  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:25:00.603713  317309 cli_runner.go:211] docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:25:00.603783  317309 network_create.go:284] running [docker network inspect default-k8s-diff-port-313784] to gather additional debugging logs...
	I0110 02:25:00.603799  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784
	W0110 02:25:00.619531  317309 cli_runner.go:211] docker network inspect default-k8s-diff-port-313784 returned with exit code 1
	I0110 02:25:00.619561  317309 network_create.go:287] error running [docker network inspect default-k8s-diff-port-313784]: docker network inspect default-k8s-diff-port-313784: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-313784 not found
	I0110 02:25:00.619592  317309 network_create.go:289] output of [docker network inspect default-k8s-diff-port-313784]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-313784 not found
	
	** /stderr **
	I0110 02:25:00.619686  317309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:00.636089  317309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:25:00.636919  317309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:25:00.637882  317309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:25:00.638718  317309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:25:00.639454  317309 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5bb0788a00cd IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:07:16:ea:24:2b} reservation:<nil>}
	I0110 02:25:00.640360  317309 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f6fef0}
	I0110 02:25:00.640387  317309 network_create.go:124] attempt to create docker network default-k8s-diff-port-313784 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0110 02:25:00.640422  317309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 default-k8s-diff-port-313784
	I0110 02:25:00.689478  317309 network_create.go:108] docker network default-k8s-diff-port-313784 192.168.94.0/24 created
	I0110 02:25:00.689512  317309 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-313784" container
	I0110 02:25:00.689566  317309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:25:00.706880  317309 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-313784 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:25:00.724154  317309 oci.go:103] Successfully created a docker volume default-k8s-diff-port-313784
	I0110 02:25:00.724237  317309 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-313784-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --entrypoint /usr/bin/test -v default-k8s-diff-port-313784:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:25:01.125000  317309 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-313784
	I0110 02:25:01.125098  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:01.125127  317309 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:25:01.125186  317309 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-313784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:25:05.011931  317309 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-313784:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.886674134s)
	I0110 02:25:05.011970  317309 kic.go:203] duration metric: took 3.886836272s to extract preloaded images to volume ...
	W0110 02:25:05.012038  317309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:25:05.012067  317309 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:25:05.012103  317309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:25:05.077023  317309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-313784 --name default-k8s-diff-port-313784 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-313784 --network default-k8s-diff-port-313784 --ip 192.168.94.2 --volume default-k8s-diff-port-313784:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:25:05.356393  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Running}}
	I0110 02:25:05.378292  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
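The block above walks the kic driver's container creation: scan the existing bridge subnets, pick the first free /24 (192.168.94.0/24), create the network, seed a volume from the preload tarball, then start the node container with a static IP. A hedged sketch of checking the result by hand with plain docker commands (names and IP taken from the log; the format strings are illustrative, not the exact ones minikube uses):

	docker network inspect default-k8s-diff-port-313784 -f '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	docker container inspect default-k8s-diff-port-313784 -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'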
	W0110 02:25:00.975628  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	W0110 02:25:03.473338  306368 node_ready.go:57] node "embed-certs-872415" has "Ready":"False" status (will retry)
	I0110 02:25:05.473260  306368 node_ready.go:49] node "embed-certs-872415" is "Ready"
	I0110 02:25:05.473290  306368 node_ready.go:38] duration metric: took 13.003254802s for node "embed-certs-872415" to be "Ready" ...
	I0110 02:25:05.473307  306368 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:05.473367  306368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:05.487970  306368 api_server.go:72] duration metric: took 13.381037629s to wait for apiserver process to appear ...
	I0110 02:25:05.487997  306368 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:05.488020  306368 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:25:05.492948  306368 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0110 02:25:05.493802  306368 api_server.go:141] control plane version: v1.35.0
	I0110 02:25:05.493822  306368 api_server.go:131] duration metric: took 5.818404ms to wait for apiserver health ...
	I0110 02:25:05.493830  306368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:05.497635  306368 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:05.497667  306368 system_pods.go:61] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.497675  306368 system_pods.go:61] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.497684  306368 system_pods.go:61] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.497693  306368 system_pods.go:61] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.497698  306368 system_pods.go:61] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.497703  306368 system_pods.go:61] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.497714  306368 system_pods.go:61] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.497724  306368 system_pods.go:61] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:05.497734  306368 system_pods.go:74] duration metric: took 3.897808ms to wait for pod list to return data ...
	I0110 02:25:05.497754  306368 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:05.500151  306368 default_sa.go:45] found service account: "default"
	I0110 02:25:05.500168  306368 default_sa.go:55] duration metric: took 2.404258ms for default service account to be created ...
	I0110 02:25:05.500178  306368 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:05.502707  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:05.502735  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.502743  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.502763  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.502772  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.502779  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.502788  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.502801  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.502812  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:05.502851  306368 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:05.732008  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:05.732045  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:05.732052  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:05.732058  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:05.732061  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:05.732065  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:05.732068  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:05.732077  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:05.732135  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	W0110 02:25:01.793762  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:03.803678  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	W0110 02:25:06.292883  303444 node_ready.go:57] node "no-preload-190877" has "Ready":"False" status (will retry)
	I0110 02:25:06.048077  306368 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:06.048106  306368 system_pods.go:89] "coredns-7d764666f9-lfdgm" [fcf82466-c853-422e-a9f0-cc536a0b4c8f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:06.048114  306368 system_pods.go:89] "etcd-embed-certs-872415" [a2ab9017-e53d-4c1b-a58d-b7af78ab8465] Running
	I0110 02:25:06.048120  306368 system_pods.go:89] "kindnet-jkqz7" [a595e658-b418-4cb2-b205-4a7dccacc5a6] Running
	I0110 02:25:06.048124  306368 system_pods.go:89] "kube-apiserver-embed-certs-872415" [902471b3-7d32-4f76-b216-b716515cbdbc] Running
	I0110 02:25:06.048128  306368 system_pods.go:89] "kube-controller-manager-embed-certs-872415" [7c1023aa-20cf-47a1-827e-3ee4544442ba] Running
	I0110 02:25:06.048131  306368 system_pods.go:89] "kube-proxy-47n8d" [46c935a2-5370-4d15-9eb0-0b829972680c] Running
	I0110 02:25:06.048136  306368 system_pods.go:89] "kube-scheduler-embed-certs-872415" [21c07585-db68-45cb-bb2e-32d78cc0bfd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:06.048146  306368 system_pods.go:89] "storage-provisioner" [3924ddbe-72a5-44c0-8f2c-c2af0f54fc11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:06.048157  306368 system_pods.go:126] duration metric: took 547.973131ms to wait for k8s-apps to be running ...
	I0110 02:25:06.048165  306368 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:06.048209  306368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:06.064116  306368 system_svc.go:56] duration metric: took 15.940126ms WaitForService to wait for kubelet
	I0110 02:25:06.064146  306368 kubeadm.go:587] duration metric: took 13.957218617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:06.064170  306368 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:06.066699  306368 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:06.066724  306368 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:06.066737  306368 node_conditions.go:105] duration metric: took 2.561883ms to run NodePressure ...
	I0110 02:25:06.066749  306368 start.go:242] waiting for startup goroutines ...
	I0110 02:25:06.066755  306368 start.go:247] waiting for cluster config update ...
	I0110 02:25:06.066765  306368 start.go:256] writing updated cluster config ...
	I0110 02:25:06.067037  306368 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:06.071779  306368 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:06.075168  306368 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.079050  306368 pod_ready.go:94] pod "coredns-7d764666f9-lfdgm" is "Ready"
	I0110 02:25:06.079072  306368 pod_ready.go:86] duration metric: took 3.879968ms for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.081862  306368 pod_ready.go:83] waiting for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.085524  306368 pod_ready.go:94] pod "etcd-embed-certs-872415" is "Ready"
	I0110 02:25:06.085544  306368 pod_ready.go:86] duration metric: took 3.660409ms for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.176936  306368 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.181365  306368 pod_ready.go:94] pod "kube-apiserver-embed-certs-872415" is "Ready"
	I0110 02:25:06.181390  306368 pod_ready.go:86] duration metric: took 4.431195ms for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.183390  306368 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.476403  306368 pod_ready.go:94] pod "kube-controller-manager-embed-certs-872415" is "Ready"
	I0110 02:25:06.476433  306368 pod_ready.go:86] duration metric: took 293.020723ms for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:06.676583  306368 pod_ready.go:83] waiting for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.075937  306368 pod_ready.go:94] pod "kube-proxy-47n8d" is "Ready"
	I0110 02:25:07.075962  306368 pod_ready.go:86] duration metric: took 399.357725ms for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.275762  306368 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.676288  306368 pod_ready.go:94] pod "kube-scheduler-embed-certs-872415" is "Ready"
	I0110 02:25:07.676321  306368 pod_ready.go:86] duration metric: took 400.536667ms for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:07.676336  306368 pod_ready.go:40] duration metric: took 1.604527147s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:07.719956  306368 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:25:07.721934  306368 out.go:179] * Done! kubectl is now configured to use "embed-certs-872415" cluster and "default" namespace by default
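Once the context is written, the pod readiness polled above can be re-checked directly against the cluster; a small sketch, assuming kubectl is on PATH and using the context name from the "Done!" line:

	kubectl --context embed-certs-872415 get nodes
	kubectl --context embed-certs-872415 get pods -n kube-system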
	I0110 02:25:05.401124  317309 cli_runner.go:164] Run: docker exec default-k8s-diff-port-313784 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:25:05.461291  317309 oci.go:144] the created container "default-k8s-diff-port-313784" has a running status.
	I0110 02:25:05.461325  317309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa...
	I0110 02:25:05.527181  317309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:25:05.553440  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:25:05.571338  317309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:25:05.571363  317309 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-313784 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:25:05.625350  317309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:25:05.643845  317309 machine.go:94] provisionDockerMachine start ...
	I0110 02:25:05.643986  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:05.668140  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:05.668505  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:05.668526  317309 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:25:05.669701  317309 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53044->127.0.0.1:33105: read: connection reset by peer
	I0110 02:25:08.804919  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:25:08.804953  317309 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-313784"
	I0110 02:25:08.805029  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:08.830536  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:08.830857  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:08.830897  317309 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313784 && echo "default-k8s-diff-port-313784" | sudo tee /etc/hostname
	I0110 02:25:08.970898  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:25:08.970999  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:08.989931  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:08.990192  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:08.990221  317309 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:25:09.119431  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:25:09.119464  317309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:25:09.119509  317309 ubuntu.go:190] setting up certificates
	I0110 02:25:09.119530  317309 provision.go:84] configureAuth start
	I0110 02:25:09.119597  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.137764  317309 provision.go:143] copyHostCerts
	I0110 02:25:09.137826  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:25:09.137839  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:25:09.137920  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:25:09.138022  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:25:09.138036  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:25:09.138076  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:25:09.138167  317309 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:25:09.138178  317309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:25:09.138216  317309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:25:09.138278  317309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313784 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-313784 localhost minikube]
	I0110 02:25:09.239098  317309 provision.go:177] copyRemoteCerts
	I0110 02:25:09.239146  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:25:09.239181  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.257663  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.351428  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:25:09.372575  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 02:25:09.389338  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:25:09.406058  317309 provision.go:87] duration metric: took 286.505777ms to configureAuth
	I0110 02:25:09.406083  317309 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:25:09.406234  317309 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:09.406322  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.424190  317309 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:09.424409  317309 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I0110 02:25:09.424431  317309 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:25:09.693344  317309 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:25:09.693366  317309 machine.go:97] duration metric: took 4.049489301s to provisionDockerMachine
	I0110 02:25:09.693376  317309 client.go:176] duration metric: took 9.106187333s to LocalClient.Create
	I0110 02:25:09.693391  317309 start.go:167] duration metric: took 9.106245033s to libmachine.API.Create "default-k8s-diff-port-313784"
	I0110 02:25:09.693398  317309 start.go:293] postStartSetup for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:25:09.693406  317309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:25:09.693467  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:25:09.693512  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.711811  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.808337  317309 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:25:09.811710  317309 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:25:09.811743  317309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:25:09.811753  317309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:25:09.811808  317309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:25:09.811948  317309 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:25:09.812067  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:25:09.819116  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:09.838594  317309 start.go:296] duration metric: took 145.186868ms for postStartSetup
	I0110 02:25:09.838928  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.859826  317309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:25:09.860096  317309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:25:09.860157  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.879278  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:09.969286  317309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:25:09.973716  317309 start.go:128] duration metric: took 9.388381319s to createHost
	I0110 02:25:09.973739  317309 start.go:83] releasing machines lock for "default-k8s-diff-port-313784", held for 9.388488398s
	I0110 02:25:09.973810  317309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:25:09.991687  317309 ssh_runner.go:195] Run: cat /version.json
	I0110 02:25:09.991749  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:09.991769  317309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:25:09.991838  317309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:25:10.011173  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:10.011613  317309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:25:10.164201  317309 ssh_runner.go:195] Run: systemctl --version
	I0110 02:25:10.170999  317309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:25:10.203430  317309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:25:10.207806  317309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:25:10.207865  317309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:25:10.232145  317309 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0110 02:25:10.232171  317309 start.go:496] detecting cgroup driver to use...
	I0110 02:25:10.232201  317309 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:25:10.232258  317309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:25:10.248036  317309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:25:10.259670  317309 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:25:10.259715  317309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:25:10.274091  317309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:25:10.290721  317309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:25:10.373029  317309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:25:10.457304  317309 docker.go:234] disabling docker service ...
	I0110 02:25:10.457371  317309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:25:10.476542  317309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:25:10.488631  317309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:25:10.575393  317309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:25:10.659386  317309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:25:10.671564  317309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:25:10.685227  317309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:25:10.685293  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.694896  317309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:25:10.694952  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.703294  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.711254  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.720324  317309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:25:10.728042  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.736233  317309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.750149  317309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:10.758257  317309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:25:10.765513  317309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:25:10.772256  317309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:10.856728  317309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:25:11.009116  317309 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:25:11.009174  317309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:25:11.012991  317309 start.go:574] Will wait 60s for crictl version
	I0110 02:25:11.013051  317309 ssh_runner.go:195] Run: which crictl
	I0110 02:25:11.016382  317309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:25:11.040332  317309 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:25:11.040407  317309 ssh_runner.go:195] Run: crio --version
	I0110 02:25:11.068000  317309 ssh_runner.go:195] Run: crio --version
	I0110 02:25:11.096409  317309 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
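The configuration pass above rewrites CRI-O's drop-in with sed rather than templating a full file. Reconstructed from those commands, the fields it sets look roughly like this (a sketch, not a dump of the real /etc/crio/crio.conf.d/02-crio.conf; the rest of the drop-in is left untouched):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

crictl is pointed at the same runtime via /etc/crictl.yaml (runtime-endpoint: unix:///var/run/crio/crio.sock), which is why the later "sudo /usr/local/bin/crictl version" call reaches CRI-O without extra flags.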
	I0110 02:25:07.793746  303444 node_ready.go:49] node "no-preload-190877" is "Ready"
	I0110 02:25:07.793778  303444 node_ready.go:38] duration metric: took 12.503505454s for node "no-preload-190877" to be "Ready" ...
	I0110 02:25:07.793798  303444 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:07.793839  303444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:07.807490  303444 api_server.go:72] duration metric: took 12.881857064s to wait for apiserver process to appear ...
	I0110 02:25:07.807521  303444 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:07.807542  303444 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:25:07.812838  303444 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:25:07.813832  303444 api_server.go:141] control plane version: v1.35.0
	I0110 02:25:07.813857  303444 api_server.go:131] duration metric: took 6.328629ms to wait for apiserver health ...
	I0110 02:25:07.813865  303444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:07.817658  303444 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:07.817700  303444 system_pods.go:61] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:07.817708  303444 system_pods.go:61] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:07.817717  303444 system_pods.go:61] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:07.817723  303444 system_pods.go:61] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:07.817735  303444 system_pods.go:61] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:07.817740  303444 system_pods.go:61] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:07.817751  303444 system_pods.go:61] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:07.817757  303444 system_pods.go:61] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:07.817766  303444 system_pods.go:74] duration metric: took 3.894183ms to wait for pod list to return data ...
	I0110 02:25:07.817781  303444 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:07.820368  303444 default_sa.go:45] found service account: "default"
	I0110 02:25:07.820384  303444 default_sa.go:55] duration metric: took 2.597024ms for default service account to be created ...
	I0110 02:25:07.820392  303444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:07.823093  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:07.823127  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:07.823135  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:07.823144  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:07.823150  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:07.823160  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:07.823165  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:07.823171  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:07.823178  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:07.823205  303444 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:25:08.059368  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.059401  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.059407  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.059414  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.059418  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.059427  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.059435  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.059442  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.059452  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:08.434364  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.434398  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.434403  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.434408  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.434412  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.434418  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.434422  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.434432  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.434437  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:08.777704  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:08.777740  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:08.777749  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:08.777758  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:08.777764  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:08.777773  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:08.777780  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:08.777786  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:08.777797  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:09.321874  303444 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:09.321923  303444 system_pods.go:89] "coredns-7d764666f9-xrkw6" [f4cf927b-a221-4397-a974-381370fe2757] Running
	I0110 02:25:09.321932  303444 system_pods.go:89] "etcd-no-preload-190877" [3c4200a9-a4ff-4d95-bee0-f0e00cf84b82] Running
	I0110 02:25:09.321937  303444 system_pods.go:89] "kindnet-rz9kz" [83af6dd6-503a-46f4-9895-3ea6558e6206] Running
	I0110 02:25:09.321941  303444 system_pods.go:89] "kube-apiserver-no-preload-190877" [f5137011-da89-48b3-b88b-e7ee722acb0a] Running
	I0110 02:25:09.321948  303444 system_pods.go:89] "kube-controller-manager-no-preload-190877" [7dd32653-18e2-4ecd-9815-943c1684579d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:09.321953  303444 system_pods.go:89] "kube-proxy-hrztb" [4a23fad6-7698-43bf-ae75-8baf92c7f9a7] Running
	I0110 02:25:09.321957  303444 system_pods.go:89] "kube-scheduler-no-preload-190877" [f1eac3ed-f72d-41d8-9528-4feb86fc1209] Running
	I0110 02:25:09.321960  303444 system_pods.go:89] "storage-provisioner" [3d30685d-b6a9-4299-baf3-866bb7aef6b8] Running
	I0110 02:25:09.321967  303444 system_pods.go:126] duration metric: took 1.501570018s to wait for k8s-apps to be running ...
	I0110 02:25:09.321974  303444 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:09.322014  303444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:09.334169  303444 system_svc.go:56] duration metric: took 12.189212ms WaitForService to wait for kubelet
	I0110 02:25:09.334193  303444 kubeadm.go:587] duration metric: took 14.408567523s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:09.334210  303444 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:09.336739  303444 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:09.336760  303444 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:09.336772  303444 node_conditions.go:105] duration metric: took 2.556902ms to run NodePressure ...
	I0110 02:25:09.336780  303444 start.go:242] waiting for startup goroutines ...
	I0110 02:25:09.336787  303444 start.go:247] waiting for cluster config update ...
	I0110 02:25:09.336806  303444 start.go:256] writing updated cluster config ...
	I0110 02:25:09.337056  303444 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:09.340676  303444 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:09.343942  303444 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.347761  303444 pod_ready.go:94] pod "coredns-7d764666f9-xrkw6" is "Ready"
	I0110 02:25:09.347777  303444 pod_ready.go:86] duration metric: took 3.816158ms for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.349481  303444 pod_ready.go:83] waiting for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.352998  303444 pod_ready.go:94] pod "etcd-no-preload-190877" is "Ready"
	I0110 02:25:09.353017  303444 pod_ready.go:86] duration metric: took 3.510866ms for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.354760  303444 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.358139  303444 pod_ready.go:94] pod "kube-apiserver-no-preload-190877" is "Ready"
	I0110 02:25:09.358160  303444 pod_ready.go:86] duration metric: took 3.382821ms for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:09.360074  303444 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:10.545355  303444 pod_ready.go:94] pod "kube-controller-manager-no-preload-190877" is "Ready"
	I0110 02:25:10.545386  303444 pod_ready.go:86] duration metric: took 1.185293683s for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:10.744868  303444 pod_ready.go:83] waiting for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.145108  303444 pod_ready.go:94] pod "kube-proxy-hrztb" is "Ready"
	I0110 02:25:11.145138  303444 pod_ready.go:86] duration metric: took 400.191312ms for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.345216  303444 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.744837  303444 pod_ready.go:94] pod "kube-scheduler-no-preload-190877" is "Ready"
	I0110 02:25:11.744864  303444 pod_ready.go:86] duration metric: took 399.621321ms for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:25:11.744879  303444 pod_ready.go:40] duration metric: took 2.404179584s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:11.792298  303444 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:25:11.794626  303444 out.go:179] * Done! kubectl is now configured to use "no-preload-190877" cluster and "default" namespace by default
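The healthz probe logged for this profile (https://192.168.76.2:8443/healthz answering 200/ok) runs from the host, so it can be reproduced there once the container is up; a sketch, with -k standing in for the minikube CA that the test harness trusts:

	curl -sk https://192.168.76.2:8443/healthz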
	I0110 02:25:11.097540  317309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:11.115277  317309 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 02:25:11.119154  317309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
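The bash one-liner above is how minikube refreshes the host.minikube.internal entry: filter out any stale line, append the new mapping, and sudo-copy the temp file back over /etc/hosts. A minimal Go sketch of the same upsert logic (it only prints the rewritten file; replacing /etc/hosts itself would still need root, as the `sudo cp` in the log does):

```go
// Sketch of the /etc/hosts upsert the log performs: drop any existing
// "host.minikube.internal" line, then append the fresh IP mapping.
// Prints the result to stdout; the logged version sudo-copies it back into place.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHostsEntry(string(data), "192.168.94.1", "host.minikube.internal"))
}
```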
	I0110 02:25:11.129083  317309 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:25:11.129199  317309 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:11.129247  317309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:11.161103  317309 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:11.161119  317309 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:25:11.161160  317309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:11.186578  317309 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:11.186598  317309 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:25:11.186608  317309 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.35.0 crio true true} ...
	I0110 02:25:11.186713  317309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-313784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:25:11.186774  317309 ssh_runner.go:195] Run: crio config
	I0110 02:25:11.230717  317309 cni.go:84] Creating CNI manager for ""
	I0110 02:25:11.230737  317309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:11.230753  317309 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:25:11.230774  317309 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313784 NodeName:default-k8s-diff-port-313784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:25:11.230907  317309 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
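The generated file above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the log below shows it being written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place. A small sketch, assuming gopkg.in/yaml.v3 is available, that splits such a file and lists each document's apiVersion/kind as a quick sanity check that all four documents made it into the file:

```go
// Sketch: walk a multi-document kubeadm config and print each apiVersion/kind.
// Path matches the kubeadm.yaml the log shows being generated on the node.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```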
	
	I0110 02:25:11.230962  317309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:25:11.238981  317309 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:25:11.239038  317309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:25:11.246847  317309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:25:11.259095  317309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:25:11.273622  317309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 02:25:11.286288  317309 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:25:11.289876  317309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:25:11.299533  317309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:11.381376  317309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:25:11.406213  317309 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784 for IP: 192.168.94.2
	I0110 02:25:11.406233  317309 certs.go:195] generating shared ca certs ...
	I0110 02:25:11.406251  317309 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.406423  317309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:25:11.406477  317309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:25:11.406492  317309 certs.go:257] generating profile certs ...
	I0110 02:25:11.406558  317309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key
	I0110 02:25:11.406585  317309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.crt with IP's: []
	I0110 02:25:11.438540  317309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.crt ...
	I0110 02:25:11.438563  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.crt: {Name:mke2a6975fe8bc62e5113e69fe3c10eb12fbe4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.438727  317309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key ...
	I0110 02:25:11.438739  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key: {Name:mk6a761fe0eff927e997500da7c44716f67ecd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.438818  317309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d
	I0110 02:25:11.438835  317309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0110 02:25:11.490606  317309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d ...
	I0110 02:25:11.490630  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d: {Name:mk3ef9b9973675767cca9b7b4bcade81137f023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.490783  317309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d ...
	I0110 02:25:11.490796  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d: {Name:mk3524da7ac9af860934d643e385dc84a373ae15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.490868  317309 certs.go:382] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt.9158e13d -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt
	I0110 02:25:11.490961  317309 certs.go:386] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key
	I0110 02:25:11.491017  317309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key
	I0110 02:25:11.491032  317309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt with IP's: []
	I0110 02:25:11.658576  317309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt ...
	I0110 02:25:11.658603  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt: {Name:mk3aa863dd03bc6be948618b1e671f9fc4de5e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.658783  317309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key ...
	I0110 02:25:11.658801  317309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key: {Name:mka9cf14994a790dbbddb5cf2ff304a71b140467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:11.659069  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:25:11.659114  317309 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:25:11.659127  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:25:11.659154  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:25:11.659190  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:25:11.659228  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:25:11.659276  317309 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:11.659854  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:25:11.677495  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:25:11.694556  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:25:11.711090  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:25:11.727796  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:25:11.746123  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:25:11.763844  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:25:11.781120  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:25:11.800329  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:25:11.822269  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:25:11.840541  317309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:25:11.858531  317309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:25:11.871499  317309 ssh_runner.go:195] Run: openssl version
	I0110 02:25:11.878826  317309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.886909  317309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:25:11.894523  317309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.898120  317309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.898176  317309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:25:11.936329  317309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:25:11.945335  317309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14086.pem /etc/ssl/certs/51391683.0
	I0110 02:25:11.955083  317309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:25:11.965290  317309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:25:11.974103  317309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:25:11.977960  317309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:25:11.978004  317309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:25:12.016720  317309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:25:12.024643  317309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/140862.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:25:12.032212  317309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.041026  317309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:25:12.049812  317309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.053935  317309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.053982  317309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:12.093213  317309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:25:12.101581  317309 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
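The lines above show the usual trust-store dance: hash each PEM with `openssl x509 -hash -noout -in …` and point `/etc/ssl/certs/<hash>.0` at it (b5213941.0 for the minikube CA here). A hedged Go sketch of the same two steps, shelling out to openssl as the log does; paths are the logged ones, and creating the link requires root:

```go
// Sketch of the cert-install steps logged above: compute the OpenSSL subject
// hash of the minikube CA and point /etc/ssl/certs/<hash>.0 at it.
// Needs the openssl binary on PATH and root to write under /etc/ssl/certs.
// (The log links via /etc/ssl/certs/minikubeCA.pem first; linking straight
// at the PEM is a simplification for this sketch.)
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" for this CA, per the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate `ln -fs` by replacing an existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
```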
	I0110 02:25:12.110131  317309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:25:12.114044  317309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:25:12.114096  317309 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:12.114177  317309 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:25:12.114233  317309 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:25:12.144583  317309 cri.go:96] found id: ""
	I0110 02:25:12.144649  317309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:25:12.154133  317309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:25:12.162231  317309 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:25:12.162284  317309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:25:12.170208  317309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:25:12.170234  317309 kubeadm.go:158] found existing configuration files:
	
	I0110 02:25:12.170277  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0110 02:25:12.178226  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:25:12.178267  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:25:12.185516  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0110 02:25:12.193256  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:25:12.193304  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:25:12.200226  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0110 02:25:12.207540  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:25:12.207578  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:25:12.214480  317309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0110 02:25:12.221509  317309 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:25:12.221555  317309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:25:12.228482  317309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:25:12.343966  317309 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 02:25:12.409151  317309 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:25:18.874773  317309 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:25:18.874842  317309 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:25:18.874959  317309 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:25:18.875039  317309 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 02:25:18.875092  317309 kubeadm.go:319] OS: Linux
	I0110 02:25:18.875162  317309 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:25:18.875235  317309 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:25:18.875330  317309 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:25:18.875420  317309 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:25:18.875492  317309 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:25:18.875540  317309 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:25:18.875582  317309 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:25:18.875617  317309 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 02:25:18.875684  317309 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:25:18.875774  317309 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:25:18.875848  317309 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:25:18.875934  317309 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:25:18.877536  317309 out.go:252]   - Generating certificates and keys ...
	I0110 02:25:18.877606  317309 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:25:18.877682  317309 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:25:18.877753  317309 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:25:18.877802  317309 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:25:18.877866  317309 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:25:18.877937  317309 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:25:18.877990  317309 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:25:18.878101  317309 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-313784 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0110 02:25:18.878156  317309 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:25:18.878363  317309 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-313784 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0110 02:25:18.878470  317309 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:25:18.878536  317309 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:25:18.878579  317309 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:25:18.878631  317309 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:25:18.878679  317309 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:25:18.878734  317309 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:25:18.878782  317309 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:25:18.878898  317309 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:25:18.878998  317309 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:25:18.879100  317309 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:25:18.879188  317309 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:25:18.880860  317309 out.go:252]   - Booting up control plane ...
	I0110 02:25:18.880965  317309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:25:18.881029  317309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:25:18.881113  317309 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:25:18.881238  317309 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:25:18.881330  317309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:25:18.881451  317309 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:25:18.881547  317309 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:25:18.881684  317309 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:25:18.881877  317309 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:25:18.882070  317309 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:25:18.882169  317309 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.425731ms
	I0110 02:25:18.882282  317309 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:25:18.882389  317309 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I0110 02:25:18.882517  317309 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:25:18.882643  317309 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:25:18.882771  317309 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.008099949s
	I0110 02:25:18.882864  317309 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.930795061s
	I0110 02:25:18.882947  317309 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50167726s
	I0110 02:25:18.883054  317309 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:25:18.883166  317309 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:25:18.883217  317309 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:25:18.883412  317309 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-313784 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:25:18.883497  317309 kubeadm.go:319] [bootstrap-token] Using token: tvezr4.sr1718x55ew4ml5x
	I0110 02:25:18.884680  317309 out.go:252]   - Configuring RBAC rules ...
	I0110 02:25:18.884773  317309 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:25:18.884863  317309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:25:18.885051  317309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:25:18.885167  317309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:25:18.885282  317309 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:25:18.885384  317309 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:25:18.885523  317309 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:25:18.885565  317309 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:25:18.885621  317309 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:25:18.885628  317309 kubeadm.go:319] 
	I0110 02:25:18.885706  317309 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:25:18.885720  317309 kubeadm.go:319] 
	I0110 02:25:18.885790  317309 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:25:18.885799  317309 kubeadm.go:319] 
	I0110 02:25:18.885825  317309 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:25:18.885880  317309 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:25:18.886004  317309 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:25:18.886018  317309 kubeadm.go:319] 
	I0110 02:25:18.886088  317309 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:25:18.886101  317309 kubeadm.go:319] 
	I0110 02:25:18.886176  317309 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:25:18.886184  317309 kubeadm.go:319] 
	I0110 02:25:18.886312  317309 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:25:18.886450  317309 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:25:18.886562  317309 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:25:18.886574  317309 kubeadm.go:319] 
	I0110 02:25:18.886644  317309 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:25:18.886719  317309 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:25:18.886728  317309 kubeadm.go:319] 
	I0110 02:25:18.886797  317309 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token tvezr4.sr1718x55ew4ml5x \
	I0110 02:25:18.886920  317309 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 \
	I0110 02:25:18.886970  317309 kubeadm.go:319] 	--control-plane 
	I0110 02:25:18.886979  317309 kubeadm.go:319] 
	I0110 02:25:18.887084  317309 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:25:18.887092  317309 kubeadm.go:319] 
	I0110 02:25:18.887193  317309 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token tvezr4.sr1718x55ew4ml5x \
	I0110 02:25:18.887319  317309 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 
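The --discovery-token-ca-cert-hash value printed in the join command is, per the kubeadm documentation, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch recomputing it from the CA the log keeps at /var/lib/minikube/certs/ca.crt (reading that path on the node requires root):

```go
// Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the CA cert's
// DER-encoded SubjectPublicKeyInfo. CA path taken from the certs lines above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```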
	I0110 02:25:18.887333  317309 cni.go:84] Creating CNI manager for ""
	I0110 02:25:18.887339  317309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:18.888603  317309 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:25:18.889695  317309 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:25:18.893753  317309 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:25:18.893769  317309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:25:18.907121  317309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:25:19.116801  317309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:25:19.116861  317309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:25:19.116917  317309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-313784 minikube.k8s.io/updated_at=2026_01_10T02_25_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=default-k8s-diff-port-313784 minikube.k8s.io/primary=true
	I0110 02:25:19.126231  317309 ops.go:34] apiserver oom_adj: -16
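The oom_adj check above (`cat /proc/$(pgrep kube-apiserver)/oom_adj`, reported as -16) just confirms the apiserver process is shielded from the OOM killer. A minimal Go equivalent, taking the first pgrep match the way the shell substitution would:

```go
// Sketch of the oom_adj check logged above: find kube-apiserver's PID via pgrep
// and read /proc/<pid>/oom_adj (the log records the value -16).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // first match, as $(pgrep ...) would expand
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
```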
	I0110 02:25:19.194813  317309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:25:19.695001  317309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:25:20.195036  317309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Jan 10 02:25:07 no-preload-190877 crio[773]: time="2026-01-10T02:25:07.875998475Z" level=info msg="Starting container: b05c2d26da993afb7f4aa777e522ba54b975b7e63a12c2faab445f31f54c2e03" id=e23e0a8a-2846-4e67-b875-79fd0118916f name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:07 no-preload-190877 crio[773]: time="2026-01-10T02:25:07.878207292Z" level=info msg="Started container" PID=2795 containerID=b05c2d26da993afb7f4aa777e522ba54b975b7e63a12c2faab445f31f54c2e03 description=kube-system/coredns-7d764666f9-xrkw6/coredns id=e23e0a8a-2846-4e67-b875-79fd0118916f name=/runtime.v1.RuntimeService/StartContainer sandboxID=68785a5ce81164b3a26512d6a9f2b450b4c2379d00a554b25610eb93df62bea9
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.260900018Z" level=info msg="Running pod sandbox: default/busybox/POD" id=384ef9c9-e734-48a0-8250-0be423e48881 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.260964299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.266291347Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce52e69e58aa1579199fd4e5815dae2468c5c0d7711dba21ed984647b86dac7a UID:db660e0d-265f-4939-9a77-c311c0ded30d NetNS:/var/run/netns/034509bf-9559-4c63-9b1e-b9d528412b82 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007d2a48}] Aliases:map[]}"
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.266324438Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.283500361Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce52e69e58aa1579199fd4e5815dae2468c5c0d7711dba21ed984647b86dac7a UID:db660e0d-265f-4939-9a77-c311c0ded30d NetNS:/var/run/netns/034509bf-9559-4c63-9b1e-b9d528412b82 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007d2a48}] Aliases:map[]}"
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.283726047Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.284718838Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.285780035Z" level=info msg="Ran pod sandbox ce52e69e58aa1579199fd4e5815dae2468c5c0d7711dba21ed984647b86dac7a with infra container: default/busybox/POD" id=384ef9c9-e734-48a0-8250-0be423e48881 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.287193675Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d93091e0-bdc6-4a0a-bbc5-1d2eaf2c95bc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.287331504Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d93091e0-bdc6-4a0a-bbc5-1d2eaf2c95bc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.287417559Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d93091e0-bdc6-4a0a-bbc5-1d2eaf2c95bc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.288797197Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=821cd041-781c-4c6d-8766-351f7f2bc400 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.289138255Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.992768194Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=821cd041-781c-4c6d-8766-351f7f2bc400 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.993324969Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e7091773-dd08-4599-b18f-e6d55d214ad2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.995040592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=164211b3-8cdf-4d95-8d7b-bf6818c2d2f9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.998157054Z" level=info msg="Creating container: default/busybox/busybox" id=c10e98cb-6c2d-48bc-8e46-e1c95f3443ce name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:12 no-preload-190877 crio[773]: time="2026-01-10T02:25:12.998298067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:13 no-preload-190877 crio[773]: time="2026-01-10T02:25:13.002534078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:13 no-preload-190877 crio[773]: time="2026-01-10T02:25:13.002941351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:13 no-preload-190877 crio[773]: time="2026-01-10T02:25:13.025216859Z" level=info msg="Created container b8695ffe49e0dc618f15167f2dc4d7de69d8b60e6a0dd9fc2304da885ec9fa1f: default/busybox/busybox" id=c10e98cb-6c2d-48bc-8e46-e1c95f3443ce name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:13 no-preload-190877 crio[773]: time="2026-01-10T02:25:13.025768089Z" level=info msg="Starting container: b8695ffe49e0dc618f15167f2dc4d7de69d8b60e6a0dd9fc2304da885ec9fa1f" id=52190e3e-394e-49a6-a21c-6a0e2a05c95e name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:13 no-preload-190877 crio[773]: time="2026-01-10T02:25:13.027586005Z" level=info msg="Started container" PID=2877 containerID=b8695ffe49e0dc618f15167f2dc4d7de69d8b60e6a0dd9fc2304da885ec9fa1f description=default/busybox/busybox id=52190e3e-394e-49a6-a21c-6a0e2a05c95e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce52e69e58aa1579199fd4e5815dae2468c5c0d7711dba21ed984647b86dac7a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b8695ffe49e0d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ce52e69e58aa1       busybox                                     default
	b05c2d26da993       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   68785a5ce8116       coredns-7d764666f9-xrkw6                    kube-system
	ae3f59785bfb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   9830fd590de9e       storage-provisioner                         kube-system
	18d5066529512       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   bd47870648967       kindnet-rz9kz                               kube-system
	38df52398ec0c       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      26 seconds ago      Running             kube-proxy                0                   b0b9de824806f       kube-proxy-hrztb                            kube-system
	4bc4800ceff6d       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   1b338a0e982ec       kube-scheduler-no-preload-190877            kube-system
	1d10ca979efae       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   8cb43665de565       kube-controller-manager-no-preload-190877   kube-system
	80eec004273c8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   7baa6b0b38873       etcd-no-preload-190877                      kube-system
	24694cac178f4       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   95bf883448025       kube-apiserver-no-preload-190877            kube-system
	
	
	==> coredns [b05c2d26da993afb7f4aa777e522ba54b975b7e63a12c2faab445f31f54c2e03] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56298 - 59157 "HINFO IN 8955337914597550064.4040254739624988302. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081016062s
	
	
	==> describe nodes <==
	Name:               no-preload-190877
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-190877
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=no-preload-190877
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-190877
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:25:20 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:25:20 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:25:20 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:25:20 +0000   Sat, 10 Jan 2026 02:25:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-190877
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                d8ee769a-5dd7-45e1-8492-7abe20102f5b
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-xrkw6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-190877                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-rz9kz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-190877             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-190877    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-hrztb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-190877             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-190877 event: Registered Node no-preload-190877 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [80eec004273c8cf1a4f10c7f34f4addcbc5c6527033c05c2c5dbac08a6ffe359] <==
	{"level":"info","ts":"2026-01-10T02:24:46.041216Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T02:24:46.041234Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:46.041253Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:46.041921Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:46.041955Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:46.041979Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:46.041989Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:46.042579Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:46.043147Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-190877 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:24:46.043150Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:46.043178Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:46.043384Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:46.043502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:46.043569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:46.043565Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:46.043665Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:46.043790Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:24:46.043945Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:24:46.044168Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:46.044637Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:46.048461Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:24:46.049443Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:25:03.679033Z","caller":"traceutil/trace.go:172","msg":"trace[1092289635] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"243.944929ms","start":"2026-01-10T02:25:03.435071Z","end":"2026-01-10T02:25:03.679016Z","steps":["trace[1092289635] 'process raft request'  (duration: 243.791299ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T02:25:03.802536Z","caller":"traceutil/trace.go:172","msg":"trace[2078678604] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"114.83042ms","start":"2026-01-10T02:25:03.687688Z","end":"2026-01-10T02:25:03.802519Z","steps":["trace[2078678604] 'process raft request'  (duration: 108.861779ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T02:25:04.575286Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.175301ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357892156691492 > lease_revoke:<id:59069ba5b86d1222>","response":"size:28"}
	
	
	==> kernel <==
	 02:25:21 up  1:07,  0 user,  load average: 3.78, 3.47, 2.27
	Linux no-preload-190877 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [18d5066529512b6130529664936929f5f6698ae8985a8c81d440d2e5615dd13d] <==
	I0110 02:24:57.220200       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:24:57.220504       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:24:57.220647       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:24:57.220673       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:24:57.220702       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:24:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:24:57.327064       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:24:57.420511       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:24:57.420552       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:24:57.420721       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:24:57.720905       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:24:57.720934       1 metrics.go:72] Registering metrics
	I0110 02:24:57.720992       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:07.327661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:25:07.327745       1 main.go:301] handling current node
	I0110 02:25:17.326767       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:25:17.326817       1 main.go:301] handling current node
	
	
	==> kube-apiserver [24694cac178f4c42d3aadb4a6704e9b7890b234e175d92534b40220a4e24cb20] <==
	I0110 02:24:47.318668       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:24:47.334543       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:24:47.335010       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:47.339624       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:24:47.340041       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:47.416506       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:24:48.120317       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:24:48.124873       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:24:48.124909       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:24:48.803074       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:24:48.856980       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:24:48.925369       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:24:48.935915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 02:24:48.937747       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:24:48.942499       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:24:49.150003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:24:49.838498       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:24:49.853156       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:24:49.866377       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:24:54.801067       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:24:54.801067       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:24:54.849990       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:24:55.063414       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:55.070241       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0110 02:25:20.041127       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:55002: use of closed network connection
	
	
	==> kube-controller-manager [1d10ca979efaefe8fe1d936c80d48ff01a66ae5c28d2d8cd0a4377a614551a8d] <==
	I0110 02:24:53.975304       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.975985       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.976807       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.976878       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.977081       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.977363       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.980191       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.980194       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.981281       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.980218       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.980246       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.981401       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.981535       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.981631       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.980207       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.986185       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.986203       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.986214       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:24:53.986220       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:24:53.986659       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.986698       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.994605       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:53.994656       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:54.063862       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:08.964321       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [38df52398ec0cd2142fcc5edf0b55e0ef9deb488925ea6cfb2e509d8eb320c24] <==
	I0110 02:24:55.244213       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:24:55.321474       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:55.421980       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:55.422022       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:24:55.422134       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:24:55.444820       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:24:55.444925       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:24:55.451723       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:24:55.452806       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:24:55.452836       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:55.455477       1 config.go:200] "Starting service config controller"
	I0110 02:24:55.456136       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:24:55.455495       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:24:55.456193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:24:55.455523       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:24:55.456205       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:24:55.455961       1 config.go:309] "Starting node config controller"
	I0110 02:24:55.456465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:24:55.456507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:24:55.556327       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:24:55.556331       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:24:55.556359       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4bc4800ceff6d1dd995808cfa3275244b6843da3193798da3fcac702727f4ea9] <==
	E0110 02:24:47.193047       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:24:47.193115       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:24:47.193174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:24:47.193901       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:24:47.190559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:24:47.194037       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:24:47.195061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:24:48.011413       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:24:48.012466       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:24:48.093065       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 02:24:48.136238       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:24:48.143306       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:24:48.149669       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:24:48.174479       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:24:48.174492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:24:48.188968       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:24:48.270168       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:24:48.281294       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:24:48.326326       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:24:48.387498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:24:48.405831       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:24:48.455371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:24:48.491251       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:24:48.547441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I0110 02:24:49.971174       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:24:54 no-preload-190877 kubelet[2196]: I0110 02:24:54.917921    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/83af6dd6-503a-46f4-9895-3ea6558e6206-cni-cfg\") pod \"kindnet-rz9kz\" (UID: \"83af6dd6-503a-46f4-9895-3ea6558e6206\") " pod="kube-system/kindnet-rz9kz"
	Jan 10 02:24:54 no-preload-190877 kubelet[2196]: I0110 02:24:54.917975    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83af6dd6-503a-46f4-9895-3ea6558e6206-lib-modules\") pod \"kindnet-rz9kz\" (UID: \"83af6dd6-503a-46f4-9895-3ea6558e6206\") " pod="kube-system/kindnet-rz9kz"
	Jan 10 02:24:54 no-preload-190877 kubelet[2196]: I0110 02:24:54.918082    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a23fad6-7698-43bf-ae75-8baf92c7f9a7-kube-proxy\") pod \"kube-proxy-hrztb\" (UID: \"4a23fad6-7698-43bf-ae75-8baf92c7f9a7\") " pod="kube-system/kube-proxy-hrztb"
	Jan 10 02:24:54 no-preload-190877 kubelet[2196]: I0110 02:24:54.918147    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a23fad6-7698-43bf-ae75-8baf92c7f9a7-xtables-lock\") pod \"kube-proxy-hrztb\" (UID: \"4a23fad6-7698-43bf-ae75-8baf92c7f9a7\") " pod="kube-system/kube-proxy-hrztb"
	Jan 10 02:24:54 no-preload-190877 kubelet[2196]: I0110 02:24:54.918192    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7fbd\" (UniqueName: \"kubernetes.io/projected/83af6dd6-503a-46f4-9895-3ea6558e6206-kube-api-access-v7fbd\") pod \"kindnet-rz9kz\" (UID: \"83af6dd6-503a-46f4-9895-3ea6558e6206\") " pod="kube-system/kindnet-rz9kz"
	Jan 10 02:24:55 no-preload-190877 kubelet[2196]: I0110 02:24:55.762341    2196 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-hrztb" podStartSLOduration=1.762319977 podStartE2EDuration="1.762319977s" podCreationTimestamp="2026-01-10 02:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:24:55.762288865 +0000 UTC m=+6.167242968" watchObservedRunningTime="2026-01-10 02:24:55.762319977 +0000 UTC m=+6.167274080"
	Jan 10 02:24:56 no-preload-190877 kubelet[2196]: E0110 02:24:56.580505    2196 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-190877" containerName="kube-apiserver"
	Jan 10 02:24:57 no-preload-190877 kubelet[2196]: I0110 02:24:57.771795    2196 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-rz9kz" podStartSLOduration=1.9907884340000002 podStartE2EDuration="3.771773456s" podCreationTimestamp="2026-01-10 02:24:54 +0000 UTC" firstStartedPulling="2026-01-10 02:24:55.135208901 +0000 UTC m=+5.540163004" lastFinishedPulling="2026-01-10 02:24:56.916193943 +0000 UTC m=+7.321148026" observedRunningTime="2026-01-10 02:24:57.771246762 +0000 UTC m=+8.176200866" watchObservedRunningTime="2026-01-10 02:24:57.771773456 +0000 UTC m=+8.176727562"
	Jan 10 02:25:00 no-preload-190877 kubelet[2196]: E0110 02:25:00.016217    2196 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-190877" containerName="kube-controller-manager"
	Jan 10 02:25:01 no-preload-190877 kubelet[2196]: E0110 02:25:01.043445    2196 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-190877" containerName="etcd"
	Jan 10 02:25:03 no-preload-190877 kubelet[2196]: E0110 02:25:03.428895    2196 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-190877" containerName="kube-scheduler"
	Jan 10 02:25:06 no-preload-190877 kubelet[2196]: E0110 02:25:06.586627    2196 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-190877" containerName="kube-apiserver"
	Jan 10 02:25:07 no-preload-190877 kubelet[2196]: I0110 02:25:07.487724    2196 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:25:07 no-preload-190877 kubelet[2196]: I0110 02:25:07.613967    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3d30685d-b6a9-4299-baf3-866bb7aef6b8-tmp\") pod \"storage-provisioner\" (UID: \"3d30685d-b6a9-4299-baf3-866bb7aef6b8\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:07 no-preload-190877 kubelet[2196]: I0110 02:25:07.614009    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4cf927b-a221-4397-a974-381370fe2757-config-volume\") pod \"coredns-7d764666f9-xrkw6\" (UID: \"f4cf927b-a221-4397-a974-381370fe2757\") " pod="kube-system/coredns-7d764666f9-xrkw6"
	Jan 10 02:25:07 no-preload-190877 kubelet[2196]: I0110 02:25:07.614037    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsnq6\" (UniqueName: \"kubernetes.io/projected/3d30685d-b6a9-4299-baf3-866bb7aef6b8-kube-api-access-tsnq6\") pod \"storage-provisioner\" (UID: \"3d30685d-b6a9-4299-baf3-866bb7aef6b8\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:07 no-preload-190877 kubelet[2196]: I0110 02:25:07.614054    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bsz\" (UniqueName: \"kubernetes.io/projected/f4cf927b-a221-4397-a974-381370fe2757-kube-api-access-k8bsz\") pod \"coredns-7d764666f9-xrkw6\" (UID: \"f4cf927b-a221-4397-a974-381370fe2757\") " pod="kube-system/coredns-7d764666f9-xrkw6"
	Jan 10 02:25:08 no-preload-190877 kubelet[2196]: E0110 02:25:08.781357    2196 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xrkw6" containerName="coredns"
	Jan 10 02:25:08 no-preload-190877 kubelet[2196]: I0110 02:25:08.794316    2196 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xrkw6" podStartSLOduration=13.794298656 podStartE2EDuration="13.794298656s" podCreationTimestamp="2026-01-10 02:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:08.793949618 +0000 UTC m=+19.198903738" watchObservedRunningTime="2026-01-10 02:25:08.794298656 +0000 UTC m=+19.199252756"
	Jan 10 02:25:08 no-preload-190877 kubelet[2196]: I0110 02:25:08.817933    2196 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.817910068 podStartE2EDuration="13.817910068s" podCreationTimestamp="2026-01-10 02:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:08.806205522 +0000 UTC m=+19.211159625" watchObservedRunningTime="2026-01-10 02:25:08.817910068 +0000 UTC m=+19.222864151"
	Jan 10 02:25:09 no-preload-190877 kubelet[2196]: E0110 02:25:09.784657    2196 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xrkw6" containerName="coredns"
	Jan 10 02:25:10 no-preload-190877 kubelet[2196]: E0110 02:25:10.020850    2196 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-190877" containerName="kube-controller-manager"
	Jan 10 02:25:10 no-preload-190877 kubelet[2196]: E0110 02:25:10.786946    2196 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xrkw6" containerName="coredns"
	Jan 10 02:25:12 no-preload-190877 kubelet[2196]: I0110 02:25:12.038851    2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwc5q\" (UniqueName: \"kubernetes.io/projected/db660e0d-265f-4939-9a77-c311c0ded30d-kube-api-access-rwc5q\") pod \"busybox\" (UID: \"db660e0d-265f-4939-9a77-c311c0ded30d\") " pod="default/busybox"
	Jan 10 02:25:13 no-preload-190877 kubelet[2196]: I0110 02:25:13.805939    2196 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.0995616 podStartE2EDuration="2.805920822s" podCreationTimestamp="2026-01-10 02:25:11 +0000 UTC" firstStartedPulling="2026-01-10 02:25:12.287841404 +0000 UTC m=+22.692795503" lastFinishedPulling="2026-01-10 02:25:12.994200631 +0000 UTC m=+23.399154725" observedRunningTime="2026-01-10 02:25:13.805659772 +0000 UTC m=+24.210613875" watchObservedRunningTime="2026-01-10 02:25:13.805920822 +0000 UTC m=+24.210874925"
	
	
	==> storage-provisioner [ae3f59785bfb78e213bfcae57b06de1364368544a47c151891334c774152ec0c] <==
	I0110 02:25:07.887702       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:25:07.896754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:25:07.896810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:25:07.899086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:07.903750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:25:07.903974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:25:07.904150       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-190877_479c4acc-e880-4cc3-930a-3b872157836d!
	I0110 02:25:07.904111       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d495bc3-96f4-4c63-bede-e941f6968552", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-190877_479c4acc-e880-4cc3-930a-3b872157836d became leader
	W0110 02:25:07.905909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:07.908853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:25:08.005197       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-190877_479c4acc-e880-4cc3-930a-3b872157836d!
	W0110 02:25:09.912157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:09.915873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:11.919802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:11.925499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:13.929288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:13.935613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:15.938921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:15.943094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:17.947471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:17.951596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:19.955331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:19.959433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190877 -n no-preload-190877
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-190877 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.938242ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:25:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-313784 describe deploy/metrics-server -n kube-system: exit status 1 (60.093198ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-313784 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-313784
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-313784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85",
	        "Created": "2026-01-10T02:25:05.094879814Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318020,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:05.129726299Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/hosts",
	        "LogPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85-json.log",
	        "Name": "/default-k8s-diff-port-313784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-313784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-313784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85",
	                "LowerDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-313784",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-313784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-313784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-313784",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-313784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ec896971c955ff5ef766564a948a0b0db5b8773e4642f42c0db34bf8aea55043",
	            "SandboxKey": "/var/run/docker/netns/ec896971c955",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-313784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0894fcffb6ef151230a0e511493b85c03422956c47ed99558a627394939589f6",
	                    "EndpointID": "ee6c1d64113d543f7620ddb69a61a6e21092813511a36b653c18cb5705859090",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1e:42:25:cb:9c:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-313784",
	                        "40f734d8ee9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25: (1.259692904s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-647049 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ ssh     │ -p bridge-647049 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo containerd config dump                                                                                                                                                                                                  │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo crio config                                                                                                                                                                                                             │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ delete  │ -p bridge-647049                                                                                                                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:25 UTC │
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                                                                                               │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p no-preload-190877 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
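The Audit table above is the tail of minikube's command audit for this host, printed as part of the "minikube logs" output that the test helper collects on failure. A minimal sketch for reproducing the same sections outside the test harness; the audit.json location is an assumption based on the MINIKUBE_HOME used by this job and is not shown in the log itself:

    # Re-print the Audit / Last Start sections for the profile under test
    out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs

    # Raw audit entries (path assumed, not taken from this log)
    tail -n 20 /home/jenkins/minikube-integration/22414-10552/.minikube/logs/audit.json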
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:25:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:25:40.566313  327170 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:25:40.566468  327170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:40.566477  327170 out.go:374] Setting ErrFile to fd 2...
	I0110 02:25:40.566483  327170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:25:40.566807  327170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:25:40.567452  327170 out.go:368] Setting JSON to false
	I0110 02:25:40.569292  327170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4090,"bootTime":1768007851,"procs":449,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:25:40.569374  327170 start.go:143] virtualization: kvm guest
	I0110 02:25:40.571151  327170 out.go:179] * [no-preload-190877] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:25:40.572456  327170 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:25:40.572459  327170 notify.go:221] Checking for updates...
	I0110 02:25:40.573439  327170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:25:40.574921  327170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:40.575970  327170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:25:40.576955  327170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:25:40.577982  327170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:25:40.579466  327170 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:40.580214  327170 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:25:40.615184  327170 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:25:40.615274  327170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:40.682022  327170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2026-01-10 02:25:40.670193145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:40.682223  327170 docker.go:319] overlay module found
	I0110 02:25:40.683392  327170 out.go:179] * Using the docker driver based on existing profile
	I0110 02:25:40.685105  327170 start.go:309] selected driver: docker
	I0110 02:25:40.685121  327170 start.go:928] validating driver "docker" against &{Name:no-preload-190877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-190877 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:40.685242  327170 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:25:40.685880  327170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:25:40.759058  327170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2026-01-10 02:25:40.746443303 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:25:40.759412  327170 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:40.759459  327170 cni.go:84] Creating CNI manager for ""
	I0110 02:25:40.759523  327170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:40.759572  327170 start.go:353] cluster config:
	{Name:no-preload-190877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-190877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:40.760849  327170 out.go:179] * Starting "no-preload-190877" primary control-plane node in "no-preload-190877" cluster
	I0110 02:25:40.761816  327170 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:25:40.763364  327170 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:25:40.786998  324231 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.204032903s)
	I0110 02:25:40.787042  324231 api_server.go:72] duration metric: took 3.225167419s to wait for apiserver process to appear ...
	I0110 02:25:40.787050  324231 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:40.787072  324231 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:25:40.787003  324231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.933797314s)
	I0110 02:25:40.791201  324231 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-188604 addons enable metrics-server
	
	I0110 02:25:40.792370  324231 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0110 02:25:40.764704  327170 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:40.764848  327170 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/config.json ...
	I0110 02:25:40.764813  327170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:25:40.765134  327170 cache.go:107] acquiring lock: {Name:mkd3743aa6dbeee70f5052141fd97fba1a4b776a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765224  327170 cache.go:107] acquiring lock: {Name:mkfb1f32dc95eeae9eee6c5585bdb9b693b5559d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765265  327170 cache.go:107] acquiring lock: {Name:mk11c3187868c59d7d1b9dd7b9a7aaa853f8fe59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765313  327170 cache.go:107] acquiring lock: {Name:mkfa560b31d718e90a6fb60e7abe399641ae6719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765360  327170 cache.go:107] acquiring lock: {Name:mk2896e6429111a42fbd2fbb76b3a4ee4f2d47fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765399  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0110 02:25:40.765528  327170 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 176.8µs
	I0110 02:25:40.765547  327170 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 02:25:40.765326  327170 cache.go:107] acquiring lock: {Name:mk9bbd7a889f85b8b60f3f7c40b9bdb978ffb21f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765609  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 02:25:40.765622  327170 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 297.908µs
	I0110 02:25:40.765637  327170 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 02:25:40.765297  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 02:25:40.765656  327170 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 535.051µs
	I0110 02:25:40.765665  327170 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 02:25:40.765415  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 02:25:40.765677  327170 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 422.046µs
	I0110 02:25:40.765688  327170 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 02:25:40.765313  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 02:25:40.765698  327170 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 478.341µs
	I0110 02:25:40.765708  327170 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 02:25:40.765418  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 02:25:40.765723  327170 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 364.904µs
	I0110 02:25:40.765733  327170 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 02:25:40.765137  327170 cache.go:107] acquiring lock: {Name:mk463c428f881ce7360a800c3b14632d065f2bf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765429  327170 cache.go:107] acquiring lock: {Name:mk7f967abb48a89ff605bb6f1da122f70e3d41b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.765771  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 02:25:40.765780  327170 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 659.578µs
	I0110 02:25:40.765788  327170 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 02:25:40.765803  327170 cache.go:115] /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 02:25:40.765812  327170 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 384.127µs
	I0110 02:25:40.765827  327170 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22414-10552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 02:25:40.765835  327170 cache.go:87] Successfully saved all images to host disk.
	I0110 02:25:40.795013  327170 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:25:40.795033  327170 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:25:40.795053  327170 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:25:40.795090  327170 start.go:360] acquireMachinesLock for no-preload-190877: {Name:mkab3ffe699cfbfb7505e8d993dedfe70773e14b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:25:40.795138  327170 start.go:364] duration metric: took 31.304µs to acquireMachinesLock for "no-preload-190877"
	I0110 02:25:40.795155  327170 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:25:40.795161  327170 fix.go:54] fixHost starting: 
	I0110 02:25:40.795468  327170 cli_runner.go:164] Run: docker container inspect no-preload-190877 --format={{.State.Status}}
	I0110 02:25:40.819882  327170 fix.go:112] recreateIfNeeded on no-preload-190877: state=Stopped err=<nil>
	W0110 02:25:40.819979  327170 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:25:36.775790  325613 out.go:252] * Restarting existing docker container for "embed-certs-872415" ...
	I0110 02:25:36.775861  325613 cli_runner.go:164] Run: docker start embed-certs-872415
	I0110 02:25:37.058562  325613 cli_runner.go:164] Run: docker container inspect embed-certs-872415 --format={{.State.Status}}
	I0110 02:25:37.077701  325613 kic.go:430] container "embed-certs-872415" state is running.
	I0110 02:25:37.078162  325613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-872415
	I0110 02:25:37.098423  325613 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/config.json ...
	I0110 02:25:37.098681  325613 machine.go:94] provisionDockerMachine start ...
	I0110 02:25:37.098758  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:37.117226  325613 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:37.117575  325613 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I0110 02:25:37.117594  325613 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:25:37.118205  325613 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40312->127.0.0.1:33115: read: connection reset by peer
	I0110 02:25:40.322979  325613 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-872415
	
	I0110 02:25:40.323006  325613 ubuntu.go:182] provisioning hostname "embed-certs-872415"
	I0110 02:25:40.323054  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:40.351149  325613 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:40.351508  325613 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I0110 02:25:40.351528  325613 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-872415 && echo "embed-certs-872415" | sudo tee /etc/hostname
	I0110 02:25:40.510484  325613 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-872415
	
	I0110 02:25:40.510562  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:40.534208  325613 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:40.534474  325613 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I0110 02:25:40.534501  325613 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-872415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-872415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-872415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:25:40.683561  325613 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:25:40.683590  325613 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:25:40.683629  325613 ubuntu.go:190] setting up certificates
	I0110 02:25:40.683649  325613 provision.go:84] configureAuth start
	I0110 02:25:40.683710  325613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-872415
	I0110 02:25:40.707295  325613 provision.go:143] copyHostCerts
	I0110 02:25:40.707359  325613 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:25:40.707370  325613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:25:40.707450  325613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:25:40.707580  325613 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:25:40.707589  325613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:25:40.707633  325613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:25:40.707730  325613 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:25:40.707738  325613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:25:40.707981  325613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:25:40.708100  325613 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.embed-certs-872415 san=[127.0.0.1 192.168.103.2 embed-certs-872415 localhost minikube]
	I0110 02:25:40.795685  325613 provision.go:177] copyRemoteCerts
	I0110 02:25:40.795750  325613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:25:40.795803  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:40.819413  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:40.920384  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:25:40.944112  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0110 02:25:40.969377  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:25:40.989127  325613 provision.go:87] duration metric: took 305.453665ms to configureAuth
	I0110 02:25:40.989158  325613 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:25:40.989375  325613 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:40.989524  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:41.008427  325613 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:41.008629  325613 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I0110 02:25:41.008644  325613 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:25:41.355033  325613 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:25:41.355058  325613 machine.go:97] duration metric: took 4.256352336s to provisionDockerMachine
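The /etc/sysconfig/crio.minikube drop-in written a few lines above injects CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12', which marks the whole service CIDR from the cluster config as an insecure registry so that in-cluster registries reachable via ClusterIP can be pulled from without TLS; crio is then restarted to pick it up. A quick way to confirm the setting on the node, using the same ssh form that appears in the Audit table (a sketch, not commands captured in this run):

    out/minikube-linux-amd64 ssh -p embed-certs-872415 sudo cat /etc/sysconfig/crio.minikube
    # the crio unit is expected to source that file via its environment configuration
    out/minikube-linux-amd64 ssh -p embed-certs-872415 sudo systemctl cat crio --no-pager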
	I0110 02:25:41.355073  325613 start.go:293] postStartSetup for "embed-certs-872415" (driver="docker")
	I0110 02:25:41.355086  325613 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:25:41.355167  325613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:25:41.355222  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:41.378673  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:41.477067  325613 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:25:41.480542  325613 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:25:41.480575  325613 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:25:41.480588  325613 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:25:41.480640  325613 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:25:41.480714  325613 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:25:41.480818  325613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:25:41.488074  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:41.504631  325613 start.go:296] duration metric: took 149.54698ms for postStartSetup
	I0110 02:25:41.504703  325613 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:25:41.504742  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:41.523599  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:41.613364  325613 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:25:41.618079  325613 fix.go:56] duration metric: took 4.865658447s for fixHost
	I0110 02:25:41.618106  325613 start.go:83] releasing machines lock for "embed-certs-872415", held for 4.865704005s
	I0110 02:25:41.618176  325613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-872415
	I0110 02:25:41.638100  325613 ssh_runner.go:195] Run: cat /version.json
	I0110 02:25:41.638147  325613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:25:41.638151  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:41.638256  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:41.660849  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:41.662292  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:41.751912  325613 ssh_runner.go:195] Run: systemctl --version
	I0110 02:25:41.806341  325613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:25:41.840686  325613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:25:41.845903  325613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:25:41.845967  325613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:25:41.854069  325613 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:25:41.854087  325613 start.go:496] detecting cgroup driver to use...
	I0110 02:25:41.854111  325613 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:25:41.854148  325613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:25:41.867857  325613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:25:41.880293  325613 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:25:41.880347  325613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:25:41.894061  325613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:25:41.906351  325613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:25:41.988396  325613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:25:42.069272  325613 docker.go:234] disabling docker service ...
	I0110 02:25:42.069339  325613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:25:42.082945  325613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:25:42.094681  325613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:25:42.171947  325613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:25:42.253483  325613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:25:42.266110  325613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:25:42.279727  325613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:25:42.279779  325613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.288159  325613 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:25:42.288212  325613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.296205  325613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.304244  325613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.312292  325613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:25:42.319846  325613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.328020  325613 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.335624  325613 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:42.344668  325613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:25:42.351406  325613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:25:42.358170  325613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:42.437990  325613 ssh_runner.go:195] Run: sudo systemctl restart crio
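After that restart, the drop-in /etc/crio/crio.conf.d/02-crio.conf edited by the sed commands above should carry the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl that were just set. The block below is a reconstruction from those commands, not captured output; key order and the rest of the file are omitted:

    out/minikube-linux-amd64 ssh -p embed-certs-872415 sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # expected to include lines equivalent to:
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    #   pause_image = "registry.k8s.io/pause:3.10.1"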
	I0110 02:25:42.570546  325613 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:25:42.570616  325613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:25:42.574464  325613 start.go:574] Will wait 60s for crictl version
	I0110 02:25:42.574510  325613 ssh_runner.go:195] Run: which crictl
	I0110 02:25:42.577987  325613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:25:42.601211  325613 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:25:42.601269  325613 ssh_runner.go:195] Run: crio --version
	I0110 02:25:42.626682  325613 ssh_runner.go:195] Run: crio --version
	I0110 02:25:42.654173  325613 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:25:42.655160  325613 cli_runner.go:164] Run: docker network inspect embed-certs-872415 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:42.672533  325613 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0110 02:25:42.676490  325613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
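The one-liner above is minikube's idempotent /etc/hosts update: drop any existing line that ends in a tab plus host.minikube.internal, re-append the current gateway address, and copy the temp file back with sudo, since a plain redirection would run as the unprivileged SSH user. The same pattern recurs below for control-plane.minikube.internal. Spelled out as a standalone sketch (same values as this run, but not the literal log line):

    # rebuild /etc/hosts without the old entry, then append the pinned name
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.103.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts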
	I0110 02:25:42.686415  325613 kubeadm.go:884] updating cluster {Name:embed-certs-872415 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:25:42.686513  325613 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:42.686551  325613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:42.719868  325613 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:42.719909  325613 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:25:42.719965  325613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:42.744751  325613 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:42.744771  325613 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:25:42.744777  325613 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I0110 02:25:42.744862  325613 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-872415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:25:42.744949  325613 ssh_runner.go:195] Run: crio config
	I0110 02:25:42.787819  325613 cni.go:84] Creating CNI manager for ""
	I0110 02:25:42.787840  325613 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:42.787857  325613 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:25:42.787882  325613 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-872415 NodeName:embed-certs-872415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:25:42.788056  325613 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-872415"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:25:42.788123  325613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:25:42.796354  325613 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:25:42.796426  325613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:25:42.803872  325613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0110 02:25:42.816180  325613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:25:42.828128  325613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
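The 2217-byte file scp'd just above is the rendered kubeadm config shown earlier; the .new suffix exists because minikube only swaps it into /var/tmp/minikube/kubeadm.yaml when it differs from what is already on the node, and that comparison happens outside this excerpt. On a node where the cluster is being (re)initialised, a file like this is ultimately handed to kubeadm roughly as follows; this is a sketch, not the literal invocation from this run, which adds further flags:

    # inside the minikube node, using the kubeadm binary staged under /var/lib/minikube/binaries
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml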
	I0110 02:25:42.840716  325613 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:25:42.844464  325613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:25:42.853858  325613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:42.932290  325613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:25:42.961490  325613 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415 for IP: 192.168.103.2
	I0110 02:25:42.961547  325613 certs.go:195] generating shared ca certs ...
	I0110 02:25:42.961567  325613 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:42.961739  325613 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:25:42.961800  325613 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:25:42.961814  325613 certs.go:257] generating profile certs ...
	I0110 02:25:42.961934  325613 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/client.key
	I0110 02:25:42.962016  325613 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/apiserver.key.1ac86d55
	I0110 02:25:42.962073  325613 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/proxy-client.key
	I0110 02:25:42.962213  325613 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:25:42.962251  325613 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:25:42.962266  325613 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:25:42.962303  325613 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:25:42.962337  325613 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:25:42.962373  325613 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:25:42.962458  325613 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:42.963169  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:25:42.982745  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:25:43.001202  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:25:43.020141  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:25:43.042976  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 02:25:43.061411  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:25:43.078354  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:25:43.094555  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/embed-certs-872415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:25:43.111227  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:25:43.127741  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:25:43.144311  325613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:25:43.162223  325613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:25:43.174822  325613 ssh_runner.go:195] Run: openssl version
	I0110 02:25:43.180536  325613 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:25:43.187603  325613 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:25:43.194314  325613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:25:43.197831  325613 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:25:43.197868  325613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:25:43.232359  325613 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:25:43.239991  325613 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:43.247020  325613 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:25:43.253977  325613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:43.257478  325613 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:43.257521  325613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:43.290806  325613 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:25:43.297702  325613 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:25:43.304451  325613 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:25:43.311296  325613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:25:43.315050  325613 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:25:43.315097  325613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:25:43.357565  325613 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
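Editor's note: the commands above install CA certificates into the node trust store by computing the OpenSSL subject hash and checking for a /etc/ssl/certs/<hash>.0 symlink. A hedged Go sketch of that step follows; the helper name ensureHashLink and the example path are illustrative, not minikube's actual code.

// Illustrative only: compute the OpenSSL subject hash of a PEM certificate and
// ensure /etc/ssl/certs/<hash>.0 points at it, as in the logged commands.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func ensureHashLink(certPath string) error {
	// Same command the log shows: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present, mirroring "sudo test -L"
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}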
	I0110 02:25:43.366383  325613 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:25:43.370587  325613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:25:43.414878  325613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:25:43.460299  325613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:25:43.507948  325613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:25:43.558870  325613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:25:43.609857  325613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
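Editor's note: the "openssl x509 -noout ... -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. A hedged Go sketch of the same check done natively with crypto/x509 follows; the function name and example path are illustrative.

// Illustrative only: report whether a PEM certificate expires within a duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent spirit to "-checkend 86400": expiry before now + d.
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}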
	I0110 02:25:43.669008  325613 kubeadm.go:401] StartCluster: {Name:embed-certs-872415 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:43.669111  325613 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:25:43.669167  325613 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:25:43.704167  325613 cri.go:96] found id: "c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227"
	I0110 02:25:43.704187  325613 cri.go:96] found id: "a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e"
	I0110 02:25:43.704194  325613 cri.go:96] found id: "3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22"
	I0110 02:25:43.704200  325613 cri.go:96] found id: "d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766"
	I0110 02:25:43.704204  325613 cri.go:96] found id: ""
	I0110 02:25:43.704249  325613 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:25:43.718069  325613 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:25:43Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:25:43.718146  325613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:25:43.726544  325613 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:25:43.726562  325613 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:25:43.726605  325613 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:25:43.733932  325613 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:25:43.734748  325613 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-872415" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:43.735249  325613 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-10552/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-872415" cluster setting kubeconfig missing "embed-certs-872415" context setting]
	I0110 02:25:43.736061  325613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:43.738025  325613 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:25:43.745243  325613 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I0110 02:25:43.745271  325613 kubeadm.go:602] duration metric: took 18.703694ms to restartPrimaryControlPlane
	I0110 02:25:43.745285  325613 kubeadm.go:403] duration metric: took 76.287154ms to StartCluster
	I0110 02:25:43.745300  325613 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:43.745371  325613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:43.747183  325613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:43.747450  325613 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:25:43.747558  325613 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:25:43.747650  325613 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:43.747666  325613 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-872415"
	I0110 02:25:43.747687  325613 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-872415"
	I0110 02:25:43.747699  325613 addons.go:70] Setting default-storageclass=true in profile "embed-certs-872415"
	I0110 02:25:43.747700  325613 addons.go:70] Setting dashboard=true in profile "embed-certs-872415"
	I0110 02:25:43.747721  325613 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-872415"
	I0110 02:25:43.747729  325613 addons.go:239] Setting addon dashboard=true in "embed-certs-872415"
	W0110 02:25:43.747740  325613 addons.go:248] addon dashboard should already be in state true
	I0110 02:25:43.747776  325613 host.go:66] Checking if "embed-certs-872415" exists ...
	W0110 02:25:43.747701  325613 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:25:43.747833  325613 host.go:66] Checking if "embed-certs-872415" exists ...
	I0110 02:25:43.748041  325613 cli_runner.go:164] Run: docker container inspect embed-certs-872415 --format={{.State.Status}}
	I0110 02:25:43.748309  325613 cli_runner.go:164] Run: docker container inspect embed-certs-872415 --format={{.State.Status}}
	I0110 02:25:43.748310  325613 cli_runner.go:164] Run: docker container inspect embed-certs-872415 --format={{.State.Status}}
	I0110 02:25:43.750494  325613 out.go:179] * Verifying Kubernetes components...
	I0110 02:25:43.751809  325613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:43.772922  325613 addons.go:239] Setting addon default-storageclass=true in "embed-certs-872415"
	W0110 02:25:43.772946  325613 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:25:43.772971  325613 host.go:66] Checking if "embed-certs-872415" exists ...
	I0110 02:25:43.773419  325613 cli_runner.go:164] Run: docker container inspect embed-certs-872415 --format={{.State.Status}}
	I0110 02:25:43.773919  325613 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:25:43.773918  325613 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:25:43.775530  325613 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:25:43.775548  325613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:25:43.775558  325613 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:25:40.793281  324231 addons.go:530] duration metric: took 3.231356857s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0110 02:25:40.793410  324231 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:25:40.794697  324231 api_server.go:141] control plane version: v1.28.0
	I0110 02:25:40.794716  324231 api_server.go:131] duration metric: took 7.659504ms to wait for apiserver health ...
	I0110 02:25:40.794724  324231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:25:40.798267  324231 system_pods.go:59] 8 kube-system pods found
	I0110 02:25:40.798313  324231 system_pods.go:61] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:40.798333  324231 system_pods.go:61] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:25:40.798350  324231 system_pods.go:61] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:25:40.798358  324231 system_pods.go:61] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:25:40.798376  324231 system_pods.go:61] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:40.798391  324231 system_pods.go:61] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:25:40.798403  324231 system_pods.go:61] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:40.798412  324231 system_pods.go:61] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:40.798425  324231 system_pods.go:74] duration metric: took 3.694242ms to wait for pod list to return data ...
	I0110 02:25:40.798446  324231 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:25:40.800290  324231 default_sa.go:45] found service account: "default"
	I0110 02:25:40.800311  324231 default_sa.go:55] duration metric: took 1.854847ms for default service account to be created ...
	I0110 02:25:40.800320  324231 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:25:40.805176  324231 system_pods.go:86] 8 kube-system pods found
	I0110 02:25:40.805265  324231 system_pods.go:89] "coredns-5dd5756b68-vc68c" [c1dc1059-c986-4d7a-80ab-b983545f5602] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:25:40.805281  324231 system_pods.go:89] "etcd-old-k8s-version-188604" [8f894562-69d7-4bdf-98d6-46b86196772b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:25:40.805291  324231 system_pods.go:89] "kindnet-25dkr" [0d70b272-4962-4030-b190-a69657eab2cd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:25:40.805301  324231 system_pods.go:89] "kube-apiserver-old-k8s-version-188604" [95d93261-4d6d-494e-a443-b35249c869b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:25:40.805309  324231 system_pods.go:89] "kube-controller-manager-old-k8s-version-188604" [a8362606-c43a-4982-9a5f-f36d4a497496] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:25:40.805349  324231 system_pods.go:89] "kube-proxy-c445q" [afdd3e61-ba2d-499d-a5bb-6ec541371d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:25:40.805362  324231 system_pods.go:89] "kube-scheduler-old-k8s-version-188604" [7a1f7c6e-3cb8-487a-a75a-b8138b8da248] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:25:40.805371  324231 system_pods.go:89] "storage-provisioner" [ef938075-c2da-49a3-a955-89f2a00bacf7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:25:40.805383  324231 system_pods.go:126] duration metric: took 5.056502ms to wait for k8s-apps to be running ...
	I0110 02:25:40.805396  324231 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:25:40.805470  324231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:25:40.823912  324231 system_svc.go:56] duration metric: took 18.510136ms WaitForService to wait for kubelet
	I0110 02:25:40.823941  324231 kubeadm.go:587] duration metric: took 3.262066571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:25:40.823961  324231 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:25:40.827022  324231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:25:40.827044  324231 node_conditions.go:123] node cpu capacity is 8
	I0110 02:25:40.827057  324231 node_conditions.go:105] duration metric: took 3.090843ms to run NodePressure ...
	I0110 02:25:40.827068  324231 start.go:242] waiting for startup goroutines ...
	I0110 02:25:40.827075  324231 start.go:247] waiting for cluster config update ...
	I0110 02:25:40.827084  324231 start.go:256] writing updated cluster config ...
	I0110 02:25:40.827290  324231 ssh_runner.go:195] Run: rm -f paused
	I0110 02:25:40.831323  324231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:25:40.836213  324231 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:25:42.842026  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:25:44.842077  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	I0110 02:25:40.821291  327170 out.go:252] * Restarting existing docker container for "no-preload-190877" ...
	I0110 02:25:40.821356  327170 cli_runner.go:164] Run: docker start no-preload-190877
	I0110 02:25:41.079791  327170 cli_runner.go:164] Run: docker container inspect no-preload-190877 --format={{.State.Status}}
	I0110 02:25:41.098670  327170 kic.go:430] container "no-preload-190877" state is running.
	I0110 02:25:41.099052  327170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-190877
	I0110 02:25:41.117820  327170 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/config.json ...
	I0110 02:25:41.118066  327170 machine.go:94] provisionDockerMachine start ...
	I0110 02:25:41.118127  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:41.136328  327170 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:41.136590  327170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I0110 02:25:41.136605  327170 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:25:41.137307  327170 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56968->127.0.0.1:33120: read: connection reset by peer
	I0110 02:25:44.267732  327170 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-190877
	
	I0110 02:25:44.267757  327170 ubuntu.go:182] provisioning hostname "no-preload-190877"
	I0110 02:25:44.267809  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:44.285501  327170 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:44.285715  327170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I0110 02:25:44.285727  327170 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-190877 && echo "no-preload-190877" | sudo tee /etc/hostname
	I0110 02:25:44.430952  327170 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-190877
	
	I0110 02:25:44.431055  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:44.453041  327170 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:44.453338  327170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I0110 02:25:44.453361  327170 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-190877' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-190877/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-190877' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:25:44.590784  327170 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:25:44.590816  327170 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:25:44.590872  327170 ubuntu.go:190] setting up certificates
	I0110 02:25:44.590905  327170 provision.go:84] configureAuth start
	I0110 02:25:44.590979  327170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-190877
	I0110 02:25:44.611440  327170 provision.go:143] copyHostCerts
	I0110 02:25:44.611501  327170 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:25:44.611524  327170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:25:44.611613  327170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:25:44.611778  327170 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:25:44.611792  327170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:25:44.611838  327170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:25:44.611985  327170 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:25:44.612005  327170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:25:44.612054  327170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:25:44.612168  327170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.no-preload-190877 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-190877]
	I0110 02:25:44.729179  327170 provision.go:177] copyRemoteCerts
	I0110 02:25:44.729233  327170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:25:44.729289  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:44.749185  327170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/no-preload-190877/id_rsa Username:docker}
	I0110 02:25:44.853113  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:25:44.873052  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:25:44.889771  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:25:44.906820  327170 provision.go:87] duration metric: took 315.888878ms to configureAuth
	I0110 02:25:44.906849  327170 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:25:44.907034  327170 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:44.907164  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:44.925981  327170 main.go:144] libmachine: Using SSH client type: native
	I0110 02:25:44.926278  327170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I0110 02:25:44.926313  327170 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:25:45.296100  327170 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:25:45.296157  327170 machine.go:97] duration metric: took 4.17807407s to provisionDockerMachine
	I0110 02:25:45.296177  327170 start.go:293] postStartSetup for "no-preload-190877" (driver="docker")
	I0110 02:25:45.296192  327170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:25:45.296281  327170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:25:45.296329  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:45.325785  327170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/no-preload-190877/id_rsa Username:docker}
	I0110 02:25:45.428065  327170 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:25:45.432157  327170 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:25:45.432191  327170 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:25:45.432204  327170 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:25:45.432269  327170 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:25:45.432369  327170 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:25:45.432489  327170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:25:45.441399  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:45.459326  327170 start.go:296] duration metric: took 163.134076ms for postStartSetup
	I0110 02:25:45.459406  327170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:25:45.459454  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:45.482918  327170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/no-preload-190877/id_rsa Username:docker}
	I0110 02:25:43.775607  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:43.776631  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:25:43.776646  325613 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:25:43.776694  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:43.805428  325613 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:25:43.805453  325613 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:25:43.805553  325613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:25:43.805842  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:43.806729  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:43.830690  325613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:25:43.886369  325613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:25:43.898879  325613 node_ready.go:35] waiting up to 6m0s for node "embed-certs-872415" to be "Ready" ...
	I0110 02:25:43.913536  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:25:43.913557  325613 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:25:43.914476  325613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:25:43.928055  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:25:43.928076  325613 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:25:43.932191  325613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:25:43.943227  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:25:43.943250  325613 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:25:43.957345  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:25:43.957367  325613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:25:43.972316  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:25:43.972339  325613 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:25:43.986536  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:25:43.986558  325613 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:25:43.998775  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:25:43.998792  325613 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:25:44.010732  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:25:44.010747  325613 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:25:44.022592  325613 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:25:44.022611  325613 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:25:44.034416  325613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
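Editor's note: the apply command just above installs the dashboard addon by pointing the bundled kubectl at the in-VM kubeconfig and applying the staged manifests. A hedged Go sketch of that invocation pattern follows; the helper name applyManifests and the shortened manifest list are illustrative.

// Illustrative only: run kubectl with KUBECONFIG set and apply a list of manifests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}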
	I0110 02:25:45.270141  325613 node_ready.go:49] node "embed-certs-872415" is "Ready"
	I0110 02:25:45.270169  325613 node_ready.go:38] duration metric: took 1.371247368s for node "embed-certs-872415" to be "Ready" ...
	I0110 02:25:45.270184  325613 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:25:45.270232  325613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:25:45.815850  325613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.901338408s)
	I0110 02:25:45.815964  325613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.88375061s)
	I0110 02:25:45.816103  325613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.781643862s)
	I0110 02:25:45.816151  325613 api_server.go:72] duration metric: took 2.068668291s to wait for apiserver process to appear ...
	I0110 02:25:45.816187  325613 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:25:45.816210  325613 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:25:45.819970  325613 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-872415 addons enable metrics-server
	
	I0110 02:25:45.822719  325613 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:25:45.822758  325613 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:25:45.828564  325613 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:25:45.829509  325613 addons.go:530] duration metric: took 2.08195654s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:25:46.317045  325613 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 02:25:46.322780  325613 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:25:46.322810  325613 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
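Editor's note: the 500 responses above are expected while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) finish; the wait loop keeps polling /healthz until it returns 200. A hedged Go sketch of such a probe follows; the retry interval and deadline are illustrative, not minikube's exact values.

// Illustrative only: poll the apiserver healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serving cert is signed by minikube's own CA,
			// so certificate verification is skipped for this probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// e.g. 500 while a post-start hook is still pending, as in the log above
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}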
	I0110 02:25:45.581716  327170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:25:45.587559  327170 fix.go:56] duration metric: took 4.792393593s for fixHost
	I0110 02:25:45.587588  327170 start.go:83] releasing machines lock for "no-preload-190877", held for 4.792440028s
	I0110 02:25:45.587655  327170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-190877
	I0110 02:25:45.609504  327170 ssh_runner.go:195] Run: cat /version.json
	I0110 02:25:45.609570  327170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:25:45.609642  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:45.609578  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:25:45.632618  327170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/no-preload-190877/id_rsa Username:docker}
	I0110 02:25:45.633023  327170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/no-preload-190877/id_rsa Username:docker}
	I0110 02:25:45.784922  327170 ssh_runner.go:195] Run: systemctl --version
	I0110 02:25:45.791675  327170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:25:45.827979  327170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:25:45.832793  327170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:25:45.832868  327170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:25:45.841380  327170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:25:45.841399  327170 start.go:496] detecting cgroup driver to use...
	I0110 02:25:45.841424  327170 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:25:45.841459  327170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:25:45.856405  327170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:25:45.867805  327170 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:25:45.867850  327170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:25:45.881079  327170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:25:45.892496  327170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:25:45.977261  327170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:25:46.075294  327170 docker.go:234] disabling docker service ...
	I0110 02:25:46.075351  327170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:25:46.091763  327170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:25:46.105442  327170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:25:46.196647  327170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:25:46.296991  327170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:25:46.311536  327170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:25:46.327978  327170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:25:46.328036  327170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.338462  327170 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:25:46.338547  327170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.350192  327170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.362754  327170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.374200  327170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:25:46.385872  327170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.398243  327170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.410050  327170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:25:46.422065  327170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:25:46.431481  327170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:25:46.439331  327170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:46.520086  327170 ssh_runner.go:195] Run: sudo systemctl restart crio
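Editor's note: the sequence above rewrites the CRI-O drop-in config (pause image, cgroup manager, conmon cgroup, sysctls), reloads systemd, and restarts crio. A hedged Go sketch of the core of that step follows; it shells out to sed and systemctl just as the logged commands do, but trims the step list for brevity.

// Illustrative only: rewrite pause_image and cgroup_manager in the CRI-O drop-in
// config, then reload systemd and restart the crio service.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|`, conf},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Fprintf(os.Stderr, "%v failed: %v\n", s, err)
			os.Exit(1)
		}
	}
}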
	I0110 02:25:46.659370  327170 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:25:46.659434  327170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:25:46.663478  327170 start.go:574] Will wait 60s for crictl version
	I0110 02:25:46.663541  327170 ssh_runner.go:195] Run: which crictl
	I0110 02:25:46.667248  327170 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:25:46.695241  327170 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:25:46.695329  327170 ssh_runner.go:195] Run: crio --version
	I0110 02:25:46.724693  327170 ssh_runner.go:195] Run: crio --version
	I0110 02:25:46.753323  327170 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:25:46.754417  327170 cli_runner.go:164] Run: docker network inspect no-preload-190877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:25:46.774515  327170 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:25:46.778666  327170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
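The bash one-liner above strips any existing host.minikube.internal entry from /etc/hosts and appends the gateway mapping for this cluster network, so a follow-up check on the node should show exactly one such line (illustrative):

    $ grep host.minikube.internal /etc/hosts
    192.168.76.1	host.minikube.internal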
	I0110 02:25:46.789748  327170 kubeadm.go:884] updating cluster {Name:no-preload-190877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-190877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:25:46.789852  327170 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:25:46.789895  327170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:25:46.825365  327170 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:25:46.825385  327170 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:25:46.825392  327170 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:25:46.825500  327170 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-190877 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-190877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
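The unit fragment above is rendered into the kubelet drop-in copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line is the standard systemd idiom for clearing the vendor ExecStart before substituting the minikube-specific command line. One way to inspect the merged unit on the node (illustrative):

    $ sudo systemctl cat kubelet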
	I0110 02:25:46.825582  327170 ssh_runner.go:195] Run: crio config
	I0110 02:25:46.874643  327170 cni.go:84] Creating CNI manager for ""
	I0110 02:25:46.874664  327170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:25:46.874679  327170 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:25:46.874701  327170 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-190877 NodeName:no-preload-190877 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:25:46.874815  327170 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-190877"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
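The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2213-byte scp a few lines below) and is later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. A hedged way to sanity-check such a file by hand, assuming this kubeadm build supports the `config validate` subcommand:

    $ sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new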
	
	I0110 02:25:46.874865  327170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:25:46.883232  327170 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:25:46.883300  327170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:25:46.891294  327170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:25:46.904403  327170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:25:46.918771  327170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0110 02:25:46.931335  327170 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:25:46.935078  327170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:25:46.945678  327170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:47.028927  327170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:25:47.053706  327170 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877 for IP: 192.168.76.2
	I0110 02:25:47.053729  327170 certs.go:195] generating shared ca certs ...
	I0110 02:25:47.053748  327170 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:47.053945  327170 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:25:47.054015  327170 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:25:47.054030  327170 certs.go:257] generating profile certs ...
	I0110 02:25:47.054133  327170 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/client.key
	I0110 02:25:47.054212  327170 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/apiserver.key.f68302f3
	I0110 02:25:47.054273  327170 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/proxy-client.key
	I0110 02:25:47.054417  327170 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:25:47.054462  327170 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:25:47.054474  327170 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:25:47.054515  327170 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:25:47.054556  327170 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:25:47.054643  327170 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:25:47.054717  327170 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:25:47.055535  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:25:47.077659  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:25:47.099820  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:25:47.123000  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:25:47.148988  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:25:47.176706  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:25:47.194783  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:25:47.212773  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/no-preload-190877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:25:47.231786  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:25:47.251295  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:25:47.270608  327170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:25:47.292106  327170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:25:47.307965  327170 ssh_runner.go:195] Run: openssl version
	I0110 02:25:47.316567  327170 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:25:47.325783  327170 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:25:47.335325  327170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:25:47.340442  327170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:25:47.340484  327170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:25:47.385646  327170 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:25:47.393330  327170 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:47.400333  327170 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:25:47.407355  327170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:47.411050  327170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:47.411098  327170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:25:47.449307  327170 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:25:47.457080  327170 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:25:47.464651  327170 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:25:47.472563  327170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:25:47.476160  327170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:25:47.476208  327170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:25:47.518948  327170 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
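Each of the three certificates handled above (140862.pem, minikubeCA.pem, 14086.pem) is placed in /usr/share/ca-certificates, symlinked under /etc/ssl/certs, and then verified through its OpenSSL subject hash: the `test -L /etc/ssl/certs/<hash>.0` lines check that a hash-named symlink exists for it. An illustrative round trip for the minikubeCA entry, whose hash the surrounding lines imply is b5213941:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0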
	I0110 02:25:47.527627  327170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:25:47.531985  327170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:25:47.571054  327170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:25:47.617618  327170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:25:47.670615  327170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:25:47.730509  327170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:25:47.784311  327170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
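The six openssl runs above use `-checkend 86400` to confirm that each control-plane certificate remains valid for at least another 24 hours (86,400 seconds); openssl exits 0 and prints "Certificate will not expire" when that holds. A single-cert example mirroring the log:

    $ sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
    Certificate will not expire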
	I0110 02:25:47.828404  327170 kubeadm.go:401] StartCluster: {Name:no-preload-190877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-190877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:25:47.828541  327170 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:25:47.828624  327170 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:25:47.862759  327170 cri.go:96] found id: "6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a"
	I0110 02:25:47.862825  327170 cri.go:96] found id: "f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0"
	I0110 02:25:47.862837  327170 cri.go:96] found id: "577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c"
	I0110 02:25:47.862842  327170 cri.go:96] found id: "f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5"
	I0110 02:25:47.862847  327170 cri.go:96] found id: ""
	I0110 02:25:47.862921  327170 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:25:47.877666  327170 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:25:47Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:25:47.877755  327170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:25:47.886897  327170 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:25:47.886917  327170 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:25:47.886962  327170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:25:47.896470  327170 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:25:47.897820  327170 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-190877" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:47.898798  327170 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-10552/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-190877" cluster setting kubeconfig missing "no-preload-190877" context setting]
	I0110 02:25:47.900269  327170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:47.902902  327170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:25:47.913961  327170 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:25:47.913992  327170 kubeadm.go:602] duration metric: took 27.068095ms to restartPrimaryControlPlane
	I0110 02:25:47.914004  327170 kubeadm.go:403] duration metric: took 85.613625ms to StartCluster
	I0110 02:25:47.914021  327170 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:25:47.914075  327170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:25:47.916249  327170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
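The entries logged between 02:25:47.897820 and 02:25:47.916249 repair the test run's kubeconfig: the no-preload-190877 cluster and context were missing, so the file is rewritten under a lock before minikube starts waiting on the node. Afterwards the entries can be listed with kubectl (illustrative):

    $ kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22414-10552/kubeconfig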
	I0110 02:25:47.916528  327170 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:25:47.916592  327170 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:25:47.916721  327170 addons.go:70] Setting storage-provisioner=true in profile "no-preload-190877"
	I0110 02:25:47.916738  327170 addons.go:239] Setting addon storage-provisioner=true in "no-preload-190877"
	W0110 02:25:47.916746  327170 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:25:47.916756  327170 addons.go:70] Setting dashboard=true in profile "no-preload-190877"
	I0110 02:25:47.916783  327170 addons.go:239] Setting addon dashboard=true in "no-preload-190877"
	W0110 02:25:47.916792  327170 addons.go:248] addon dashboard should already be in state true
	I0110 02:25:47.916813  327170 host.go:66] Checking if "no-preload-190877" exists ...
	I0110 02:25:47.916825  327170 host.go:66] Checking if "no-preload-190877" exists ...
	I0110 02:25:47.917023  327170 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:25:47.916836  327170 addons.go:70] Setting default-storageclass=true in profile "no-preload-190877"
	I0110 02:25:47.917185  327170 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-190877"
	I0110 02:25:47.917596  327170 cli_runner.go:164] Run: docker container inspect no-preload-190877 --format={{.State.Status}}
	I0110 02:25:47.917650  327170 cli_runner.go:164] Run: docker container inspect no-preload-190877 --format={{.State.Status}}
	I0110 02:25:47.917842  327170 cli_runner.go:164] Run: docker container inspect no-preload-190877 --format={{.State.Status}}
	I0110 02:25:47.918738  327170 out.go:179] * Verifying Kubernetes components...
	I0110 02:25:47.919774  327170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:25:47.946374  327170 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:25:47.946387  327170 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:25:47.947550  327170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:25:47.947569  327170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:25:47.947609  327170 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
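For the restarted profile the storage-provisioner, default-storageclass, and dashboard addons are re-enabled; their manifests are copied onto the node under /etc/kubernetes/addons/ (the scp of storage-provisioner.yaml above) and applied from there. An illustrative way to inspect one of them on the node:

    $ minikube -p no-preload-190877 ssh -- sudo cat /etc/kubernetes/addons/storage-provisioner.yaml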
	
	
	==> CRI-O <==
	Jan 10 02:25:36 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:36.981372674Z" level=info msg="Starting container: 33672e6fb9c14eec4a05b041e4efb6545f3e534c3a6fd356315beefff2959d0e" id=f0012cdf-5df1-4a43-ab4a-eedec05a8614 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:36 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:36.983326755Z" level=info msg="Started container" PID=1904 containerID=33672e6fb9c14eec4a05b041e4efb6545f3e534c3a6fd356315beefff2959d0e description=kube-system/coredns-7d764666f9-rhgg5/coredns id=f0012cdf-5df1-4a43-ab4a-eedec05a8614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df8e563711549a4222bde302e0251d2cc7a588c092fec0ba30f84bf3a99d7728
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.474173929Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6b3d74ae-950f-4c91-9b4f-052f46856bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.47427286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.481331078Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b34a9ce9a4fee0f6d71e13c692b75c834294175fe3d8894857ca3065b228ec9d UID:2381c602-8214-4872-a765-3ac283fb99a2 NetNS:/var/run/netns/7452d474-5a01-429b-b6b5-a62358904bab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002f4668}] Aliases:map[]}"
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.481370357Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.504544767Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b34a9ce9a4fee0f6d71e13c692b75c834294175fe3d8894857ca3065b228ec9d UID:2381c602-8214-4872-a765-3ac283fb99a2 NetNS:/var/run/netns/7452d474-5a01-429b-b6b5-a62358904bab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002f4668}] Aliases:map[]}"
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.504683047Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.505484027Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.506300214Z" level=info msg="Ran pod sandbox b34a9ce9a4fee0f6d71e13c692b75c834294175fe3d8894857ca3065b228ec9d with infra container: default/busybox/POD" id=6b3d74ae-950f-4c91-9b4f-052f46856bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.507575469Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab549a00-01c2-4ce4-9498-ba465aa8e56d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.507686816Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ab549a00-01c2-4ce4-9498-ba465aa8e56d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.507743656Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ab549a00-01c2-4ce4-9498-ba465aa8e56d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.508484564Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0aa56eab-1c49-4da9-873e-6b0a222e4fb2 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:39 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:39.508816168Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.318120198Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0aa56eab-1c49-4da9-873e-6b0a222e4fb2 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.319908619Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dbd5330f-d44b-4f8c-b5a1-4c925d74e56f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.322043268Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=099181d2-5703-4e75-8920-bd740990f82d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.325379804Z" level=info msg="Creating container: default/busybox/busybox" id=5b7ba1cd-6411-4253-a03a-e223257e1e8e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.325538402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.334150372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.334768198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.369416969Z" level=info msg="Created container 15ffa6741e2dd817be3b36266ed6f81960bdfb00aafa18638b165b8f18d7cc7c: default/busybox/busybox" id=5b7ba1cd-6411-4253-a03a-e223257e1e8e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.370211147Z" level=info msg="Starting container: 15ffa6741e2dd817be3b36266ed6f81960bdfb00aafa18638b165b8f18d7cc7c" id=952f0b7f-cd45-41ba-9f72-ca7929e99eda name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:25:40 default-k8s-diff-port-313784 crio[780]: time="2026-01-10T02:25:40.372544431Z" level=info msg="Started container" PID=1987 containerID=15ffa6741e2dd817be3b36266ed6f81960bdfb00aafa18638b165b8f18d7cc7c description=default/busybox/busybox id=952f0b7f-cd45-41ba-9f72-ca7929e99eda name=/runtime.v1.RuntimeService/StartContainer sandboxID=b34a9ce9a4fee0f6d71e13c692b75c834294175fe3d8894857ca3065b228ec9d
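The CRI-O journal excerpt above (for the default-k8s-diff-port-313784 node, apparently captured by a `minikube logs` dump in this failing test) follows the default/busybox pod from sandbox creation through the image pull to the container start. The "container status" table below reports the same state through the CRI, roughly what `crictl ps -a` shows on that node (illustrative):

    $ minikube -p default-k8s-diff-port-313784 ssh -- sudo crictl ps -a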
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	15ffa6741e2dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   b34a9ce9a4fee       busybox                                                default
	33672e6fb9c14       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   df8e563711549       coredns-7d764666f9-rhgg5                               kube-system
	864f12b1a9cae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   57ea11db4195b       storage-provisioner                                    kube-system
	0a116f421e9da       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   944a60d457d0a       kindnet-wbscw                                          kube-system
	188f709856003       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   0b857933d4079       kube-proxy-6dcdf                                       kube-system
	e1c51326a96ec       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      34 seconds ago      Running             kube-apiserver            0                   321d88355ea18       kube-apiserver-default-k8s-diff-port-313784            kube-system
	953c1e88c817b       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      34 seconds ago      Running             kube-controller-manager   0                   3068948ef067e       kube-controller-manager-default-k8s-diff-port-313784   kube-system
	6ddaf36889007       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      34 seconds ago      Running             kube-scheduler            0                   062997cef28d3       kube-scheduler-default-k8s-diff-port-313784            kube-system
	f3fa733135684       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   264b35ea546bc       etcd-default-k8s-diff-port-313784                      kube-system
	
	
	==> coredns [33672e6fb9c14eec4a05b041e4efb6545f3e534c3a6fd356315beefff2959d0e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53351 - 5865 "HINFO IN 8406812792153796200.9071472302791815634. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087653986s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-313784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-313784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=default-k8s-diff-port-313784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-313784
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:25:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:25:48 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:25:48 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:25:48 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:25:48 +0000   Sat, 10 Jan 2026 02:25:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-313784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                eaef45ee-c0a4-4074-89b2-25c5e6ae4f6a
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-rhgg5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-313784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-wbscw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-313784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-313784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-6dcdf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-313784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node default-k8s-diff-port-313784 event: Registered Node default-k8s-diff-port-313784 in Controller
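The node description above is standard `kubectl describe node` output for the cluster's single control-plane node; the Allocated resources totals (850m CPU and 220Mi memory requested) are simply the column sums over the nine non-terminated pods listed. It can be reproduced against the profile's context (illustrative, assuming the context carries the profile name):

    $ kubectl --context default-k8s-diff-port-313784 describe node default-k8s-diff-port-313784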
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [f3fa7331356844b2ba4f73e0ddfb3978cbdbc5afafa8f127277c260bc638e5ed] <==
	{"level":"info","ts":"2026-01-10T02:25:14.547485Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:25:14.938036Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:25:14.938110Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:25:14.938190Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2026-01-10T02:25:14.938216Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:14.938238Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:14.938926Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:14.938958Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:14.938980Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:14.938987Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:14.939534Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:14.940207Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:default-k8s-diff-port-313784 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:14.940209Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:14.940238Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:14.940399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:14.940433Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:14.940513Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:14.940622Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:14.940660Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:14.940703Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:25:14.940803Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:25:14.941474Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:14.941572Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:14.944797Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2026-01-10T02:25:14.944812Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:25:48 up  1:08,  0 user,  load average: 4.21, 3.57, 2.34
	Linux default-k8s-diff-port-313784 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a116f421e9dac3fd52693a8602ebde04ba88b44a1cdcc6a71708e505aa3b726] <==
	I0110 02:25:26.154324       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:26.154617       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 02:25:26.154783       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:26.154812       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:26.154839       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:26.356162       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:26.356237       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:26.356762       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:26.451204       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:26.951924       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:26.951960       1 metrics.go:72] Registering metrics
	I0110 02:25:26.952048       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:36.355984       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:25:36.356066       1 main.go:301] handling current node
	I0110 02:25:46.359987       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:25:46.360027       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e1c51326a96ec6b0d39c99704de9b68c008fe375514647736051834aa23a6a16] <==
	E0110 02:25:15.975752       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 02:25:16.023627       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:25:16.029418       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:25:16.029778       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:25:16.038007       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:25:16.038197       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:25:16.126135       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:16.828210       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:25:16.832184       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:25:16.832195       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:25:17.293357       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:17.329170       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:17.431726       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:25:17.437409       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0110 02:25:17.438637       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:25:17.442585       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:25:17.859124       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:25:18.274931       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:25:18.283605       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:25:18.293409       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:25:23.464178       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:25:23.468513       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:25:23.661708       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:25:23.859346       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0110 02:25:47.255852       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:56906: use of closed network connection
	
	
	==> kube-controller-manager [953c1e88c817bdc67ae689a5a910229829734f80fd8b10e3c1761ac756521168] <==
	I0110 02:25:22.664965       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665091       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665158       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665289       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665341       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665587       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665755       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665782       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665819       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665859       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665946       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.665962       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.666126       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.666158       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.666182       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.666227       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.666278       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.667369       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:22.674158       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.674350       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-313784" podCIDRs=["10.244.0.0/24"]
	I0110 02:25:22.766357       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:22.766383       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:25:22.766388       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:25:22.767466       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:37.666494       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [188f7098560039691697b7ca00efac5537cdc41331a4766b0c204fc2abea6db3] <==
	I0110 02:25:24.269658       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:25:24.347450       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:24.448609       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:24.448639       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0110 02:25:24.448730       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:25:24.470823       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:24.470922       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:25:24.476240       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:25:24.477134       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:25:24.477169       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:24.478660       1 config.go:200] "Starting service config controller"
	I0110 02:25:24.478695       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:25:24.478759       1 config.go:309] "Starting node config controller"
	I0110 02:25:24.478768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:25:24.478776       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:25:24.478862       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:25:24.478868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:25:24.478906       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:25:24.478913       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:25:24.578995       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:25:24.579041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:25:24.579128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6ddaf36889007bb518312badc113d48821c14f53f7ed5cc9ff4807d509572e7c] <==
	E0110 02:25:15.894161       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:25:15.894265       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:25:15.894911       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:25:15.896675       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:25:15.896721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:25:15.896825       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:25:15.896761       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:25:15.896998       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:25:15.897018       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:25:15.897055       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:25:15.897156       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:25:15.897323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:25:15.897355       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:25:15.897421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:25:16.867106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:25:16.867106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:25:16.875163       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:25:16.936997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:25:16.954689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:25:17.005395       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:25:17.009287       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:25:17.039440       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:25:17.100097       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:25:17.199589       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I0110 02:25:20.185953       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:25:23 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:23.955362    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tct\" (UniqueName: \"kubernetes.io/projected/4ad21b3c-b663-4ee1-b481-19655b22e160-kube-api-access-82tct\") pod \"kindnet-wbscw\" (UID: \"4ad21b3c-b663-4ee1-b481-19655b22e160\") " pod="kube-system/kindnet-wbscw"
	Jan 10 02:25:23 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:23.955392    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2cb4683-cef0-4b78-9044-a209d81b5ee3-xtables-lock\") pod \"kube-proxy-6dcdf\" (UID: \"e2cb4683-cef0-4b78-9044-a209d81b5ee3\") " pod="kube-system/kube-proxy-6dcdf"
	Jan 10 02:25:23 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:23.955412    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccdt\" (UniqueName: \"kubernetes.io/projected/e2cb4683-cef0-4b78-9044-a209d81b5ee3-kube-api-access-mccdt\") pod \"kube-proxy-6dcdf\" (UID: \"e2cb4683-cef0-4b78-9044-a209d81b5ee3\") " pod="kube-system/kube-proxy-6dcdf"
	Jan 10 02:25:23 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:23.955436    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ad21b3c-b663-4ee1-b481-19655b22e160-lib-modules\") pod \"kindnet-wbscw\" (UID: \"4ad21b3c-b663-4ee1-b481-19655b22e160\") " pod="kube-system/kindnet-wbscw"
	Jan 10 02:25:23 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:23.955501    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4ad21b3c-b663-4ee1-b481-19655b22e160-cni-cfg\") pod \"kindnet-wbscw\" (UID: \"4ad21b3c-b663-4ee1-b481-19655b22e160\") " pod="kube-system/kindnet-wbscw"
	Jan 10 02:25:24 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:24.842741    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-313784" containerName="etcd"
	Jan 10 02:25:25 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:25.170298    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6dcdf" podStartSLOduration=2.170276771 podStartE2EDuration="2.170276771s" podCreationTimestamp="2026-01-10 02:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:25.170190091 +0000 UTC m=+7.135358084" watchObservedRunningTime="2026-01-10 02:25:25.170276771 +0000 UTC m=+7.135444767"
	Jan 10 02:25:26 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:26.083857    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-313784" containerName="kube-apiserver"
	Jan 10 02:25:26 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:26.172836    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-wbscw" podStartSLOduration=1.39522827 podStartE2EDuration="3.172815374s" podCreationTimestamp="2026-01-10 02:25:23 +0000 UTC" firstStartedPulling="2026-01-10 02:25:24.18878975 +0000 UTC m=+6.153957734" lastFinishedPulling="2026-01-10 02:25:25.966376843 +0000 UTC m=+7.931544838" observedRunningTime="2026-01-10 02:25:26.17268423 +0000 UTC m=+8.137852238" watchObservedRunningTime="2026-01-10 02:25:26.172815374 +0000 UTC m=+8.137983384"
	Jan 10 02:25:32 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:32.170868    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-313784" containerName="kube-scheduler"
	Jan 10 02:25:33 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:33.195940    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-313784" containerName="kube-controller-manager"
	Jan 10 02:25:34 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:34.844335    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-313784" containerName="etcd"
	Jan 10 02:25:36 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:36.090925    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-313784" containerName="kube-apiserver"
	Jan 10 02:25:36 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:36.564660    1298 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:25:36 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:36.648730    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjbnx\" (UniqueName: \"kubernetes.io/projected/7b2c9aeb-37f2-4c60-ac35-a17f643dba15-kube-api-access-hjbnx\") pod \"coredns-7d764666f9-rhgg5\" (UID: \"7b2c9aeb-37f2-4c60-ac35-a17f643dba15\") " pod="kube-system/coredns-7d764666f9-rhgg5"
	Jan 10 02:25:36 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:36.648799    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5576c2bc-6ca6-49ef-98e4-27f810e200c1-tmp\") pod \"storage-provisioner\" (UID: \"5576c2bc-6ca6-49ef-98e4-27f810e200c1\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:36 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:36.648848    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2c9aeb-37f2-4c60-ac35-a17f643dba15-config-volume\") pod \"coredns-7d764666f9-rhgg5\" (UID: \"7b2c9aeb-37f2-4c60-ac35-a17f643dba15\") " pod="kube-system/coredns-7d764666f9-rhgg5"
	Jan 10 02:25:36 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:36.648898    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr5kl\" (UniqueName: \"kubernetes.io/projected/5576c2bc-6ca6-49ef-98e4-27f810e200c1-kube-api-access-rr5kl\") pod \"storage-provisioner\" (UID: \"5576c2bc-6ca6-49ef-98e4-27f810e200c1\") " pod="kube-system/storage-provisioner"
	Jan 10 02:25:37 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:37.191176    1298 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rhgg5" containerName="coredns"
	Jan 10 02:25:37 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:37.220365    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-rhgg5" podStartSLOduration=14.220343357 podStartE2EDuration="14.220343357s" podCreationTimestamp="2026-01-10 02:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:37.208118183 +0000 UTC m=+19.173286176" watchObservedRunningTime="2026-01-10 02:25:37.220343357 +0000 UTC m=+19.185511351"
	Jan 10 02:25:37 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:37.233011    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.232987621 podStartE2EDuration="13.232987621s" podCreationTimestamp="2026-01-10 02:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:25:37.220491752 +0000 UTC m=+19.185659744" watchObservedRunningTime="2026-01-10 02:25:37.232987621 +0000 UTC m=+19.198155614"
	Jan 10 02:25:38 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:38.198731    1298 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rhgg5" containerName="coredns"
	Jan 10 02:25:39 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:39.200632    1298 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rhgg5" containerName="coredns"
	Jan 10 02:25:39 default-k8s-diff-port-313784 kubelet[1298]: I0110 02:25:39.267121    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lg6d\" (UniqueName: \"kubernetes.io/projected/2381c602-8214-4872-a765-3ac283fb99a2-kube-api-access-5lg6d\") pod \"busybox\" (UID: \"2381c602-8214-4872-a765-3ac283fb99a2\") " pod="default/busybox"
	Jan 10 02:25:47 default-k8s-diff-port-313784 kubelet[1298]: E0110 02:25:47.255585    1298 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35748->127.0.0.1:37011: write tcp 127.0.0.1:35748->127.0.0.1:37011: write: broken pipe
	
	
	==> storage-provisioner [864f12b1a9cae09b25031be9f42a359cf2b75cf5195559c383d95e37bff64bc5] <==
	I0110 02:25:36.978947       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:25:36.989147       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:25:36.989202       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:25:36.992452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:36.997858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:25:36.998037       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:25:36.998166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313784_9203b310-ddd8-4a11-8df2-c8ac5d5ed61e!
	I0110 02:25:36.998346       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"156faf5c-a028-4a8b-8a5c-5f90c9b1d50d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-313784_9203b310-ddd8-4a11-8df2-c8ac5d5ed61e became leader
	W0110 02:25:37.000584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:37.004352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:25:37.099039       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313784_9203b310-ddd8-4a11-8df2-c8ac5d5ed61e!
	W0110 02:25:39.008790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:39.014739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:41.017815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:41.021791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:43.024616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:43.029611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:45.032213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:45.038012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:47.042219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:47.046451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:49.053036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:25:49.058183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-188604 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-188604 --alsologtostderr -v=1: exit status 80 (1.658468032s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-188604 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:26:29.991354  335907 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:29.991605  335907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:29.991615  335907 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:29.991620  335907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:29.991799  335907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:29.992036  335907 out.go:368] Setting JSON to false
	I0110 02:26:29.992055  335907 mustload.go:66] Loading cluster: old-k8s-version-188604
	I0110 02:26:29.992365  335907 config.go:182] Loaded profile config "old-k8s-version-188604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:26:29.992716  335907 cli_runner.go:164] Run: docker container inspect old-k8s-version-188604 --format={{.State.Status}}
	I0110 02:26:30.010652  335907 host.go:66] Checking if "old-k8s-version-188604" exists ...
	I0110 02:26:30.010930  335907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:30.063245  335907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:26:30.053690944 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:30.063979  335907 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-188604 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:26:30.066043  335907 out.go:179] * Pausing node old-k8s-version-188604 ... 
	I0110 02:26:30.067105  335907 host.go:66] Checking if "old-k8s-version-188604" exists ...
	I0110 02:26:30.067334  335907 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:30.067382  335907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-188604
	I0110 02:26:30.084982  335907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/old-k8s-version-188604/id_rsa Username:docker}
	I0110 02:26:30.177592  335907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:30.189445  335907 pause.go:52] kubelet running: true
	I0110 02:26:30.189515  335907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:30.353132  335907 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:30.353211  335907 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:30.430741  335907 cri.go:96] found id: "3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e"
	I0110 02:26:30.430760  335907 cri.go:96] found id: "3eef71caacb4290f3264c8c7c1487a2a3a32057cebca2adde7fb5c9b5446e232"
	I0110 02:26:30.430765  335907 cri.go:96] found id: "832e7427e079dec6ed5e1274fdf2e96dc09e2cf11be39eb0ad4d7eb590ba7cb0"
	I0110 02:26:30.430768  335907 cri.go:96] found id: "2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119"
	I0110 02:26:30.430771  335907 cri.go:96] found id: "acf5b5647b7a260868d1a73059bea70c514cbda74b322acd6ecef0169e38684f"
	I0110 02:26:30.430774  335907 cri.go:96] found id: "861ce74c9faf076868c60d47909834154c9a2f93ac74567527702fb1423497f3"
	I0110 02:26:30.430777  335907 cri.go:96] found id: "c7891e84c7b07fa814892b06d908aeaae4f1e237406a3b4e0c937ca6047439f5"
	I0110 02:26:30.430780  335907 cri.go:96] found id: "a022cc94e780e8ed928e70a6eda0944c970eecd9d4d3e3af71a2fd593d685500"
	I0110 02:26:30.430784  335907 cri.go:96] found id: "583aa40ef23feaf98f416116520b822f7fed26e3509ae9a5afe569be8de6ceff"
	I0110 02:26:30.430793  335907 cri.go:96] found id: "7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	I0110 02:26:30.430797  335907 cri.go:96] found id: "63fa889860ac4c551c147aa893d7114cdb4799c26166f95c40f2c08d1a1f8641"
	I0110 02:26:30.430801  335907 cri.go:96] found id: ""
	I0110 02:26:30.430846  335907 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:30.444645  335907 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:30Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:26:30.697141  335907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:30.709650  335907 pause.go:52] kubelet running: false
	I0110 02:26:30.709697  335907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:30.847111  335907 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:30.847199  335907 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:30.915965  335907 cri.go:96] found id: "3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e"
	I0110 02:26:30.915983  335907 cri.go:96] found id: "3eef71caacb4290f3264c8c7c1487a2a3a32057cebca2adde7fb5c9b5446e232"
	I0110 02:26:30.915988  335907 cri.go:96] found id: "832e7427e079dec6ed5e1274fdf2e96dc09e2cf11be39eb0ad4d7eb590ba7cb0"
	I0110 02:26:30.915992  335907 cri.go:96] found id: "2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119"
	I0110 02:26:30.915996  335907 cri.go:96] found id: "acf5b5647b7a260868d1a73059bea70c514cbda74b322acd6ecef0169e38684f"
	I0110 02:26:30.916001  335907 cri.go:96] found id: "861ce74c9faf076868c60d47909834154c9a2f93ac74567527702fb1423497f3"
	I0110 02:26:30.916006  335907 cri.go:96] found id: "c7891e84c7b07fa814892b06d908aeaae4f1e237406a3b4e0c937ca6047439f5"
	I0110 02:26:30.916010  335907 cri.go:96] found id: "a022cc94e780e8ed928e70a6eda0944c970eecd9d4d3e3af71a2fd593d685500"
	I0110 02:26:30.916014  335907 cri.go:96] found id: "583aa40ef23feaf98f416116520b822f7fed26e3509ae9a5afe569be8de6ceff"
	I0110 02:26:30.916022  335907 cri.go:96] found id: "7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	I0110 02:26:30.916027  335907 cri.go:96] found id: "63fa889860ac4c551c147aa893d7114cdb4799c26166f95c40f2c08d1a1f8641"
	I0110 02:26:30.916031  335907 cri.go:96] found id: ""
	I0110 02:26:30.916073  335907 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:31.323934  335907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:31.350593  335907 pause.go:52] kubelet running: false
	I0110 02:26:31.350655  335907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:31.503058  335907 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:31.503114  335907 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:31.571845  335907 cri.go:96] found id: "3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e"
	I0110 02:26:31.571868  335907 cri.go:96] found id: "3eef71caacb4290f3264c8c7c1487a2a3a32057cebca2adde7fb5c9b5446e232"
	I0110 02:26:31.571872  335907 cri.go:96] found id: "832e7427e079dec6ed5e1274fdf2e96dc09e2cf11be39eb0ad4d7eb590ba7cb0"
	I0110 02:26:31.571875  335907 cri.go:96] found id: "2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119"
	I0110 02:26:31.571878  335907 cri.go:96] found id: "acf5b5647b7a260868d1a73059bea70c514cbda74b322acd6ecef0169e38684f"
	I0110 02:26:31.571882  335907 cri.go:96] found id: "861ce74c9faf076868c60d47909834154c9a2f93ac74567527702fb1423497f3"
	I0110 02:26:31.571899  335907 cri.go:96] found id: "c7891e84c7b07fa814892b06d908aeaae4f1e237406a3b4e0c937ca6047439f5"
	I0110 02:26:31.571916  335907 cri.go:96] found id: "a022cc94e780e8ed928e70a6eda0944c970eecd9d4d3e3af71a2fd593d685500"
	I0110 02:26:31.571924  335907 cri.go:96] found id: "583aa40ef23feaf98f416116520b822f7fed26e3509ae9a5afe569be8de6ceff"
	I0110 02:26:31.571932  335907 cri.go:96] found id: "7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	I0110 02:26:31.571939  335907 cri.go:96] found id: "63fa889860ac4c551c147aa893d7114cdb4799c26166f95c40f2c08d1a1f8641"
	I0110 02:26:31.571944  335907 cri.go:96] found id: ""
	I0110 02:26:31.571987  335907 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:31.586758  335907 out.go:203] 
	W0110 02:26:31.587815  335907 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:26:31.587829  335907 out.go:285] * 
	* 
	W0110 02:26:31.589506  335907 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:26:31.590564  335907 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-188604 --alsologtostderr -v=1 failed: exit status 80
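A minimal follow-up sketch for the GUEST_PAUSE failure above, assuming the goal is to confirm whether the node simply lacks runc's default state directory while crio is still running containers; the paths checked and the `wc -l` count are illustrative, not something the test itself performs:

	minikube -p old-k8s-version-188604 ssh -- 'ls -d /run/runc* /run/crio 2>/dev/null; sudo crictl --timeout=10s ps --quiet | wc -l'

If /run/runc is absent but crictl still reports running containers, then only the `sudo runc list -f json` step is failing, which matches the stderr captured above.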
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-188604
helpers_test.go:244: (dbg) docker inspect old-k8s-version-188604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339",
	        "Created": "2026-01-10T02:24:20.28221194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324437,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:30.690182836Z",
	            "FinishedAt": "2026-01-10T02:25:29.798989359Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/hostname",
	        "HostsPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/hosts",
	        "LogPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339-json.log",
	        "Name": "/old-k8s-version-188604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-188604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-188604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339",
	                "LowerDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-188604",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-188604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-188604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-188604",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-188604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9e7d9d480244b5134f253387bfe30b1a48ee2c5c4e005be71a4f524b8b78a2e9",
	            "SandboxKey": "/var/run/docker/netns/9e7d9d480244",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-188604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bb0788a00cda98d7846d64fc1fe66eb98afdb7fd381de926d036feac84ba741",
	                    "EndpointID": "acdb1d80437cbc9317afbab691f1dd756c766bcd8506b5b7ddecbe0fc7fe8778",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:f1:cb:af:42:3b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-188604",
	                        "d326f6fd278c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
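For reference, the 22/tcp host-port that the harness resolves via cli_runner.go earlier in this log can be read back out of the inspect dump above with the same Go template; a small sketch using the profile name from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-188604
	# prints 33110 for the container state captured above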
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604: exit status 2 (321.66664ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-188604 logs -n 25: (1.052464935s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-647049 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo crio config                                                                                                                                                                                                             │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ delete  │ -p bridge-647049                                                                                                                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:25 UTC │
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                                                                                               │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p no-preload-190877 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:07.831058  333054 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:07.831142  333054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:07.831149  333054 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:07.831154  333054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:07.831353  333054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:07.831767  333054 out.go:368] Setting JSON to false
	I0110 02:26:07.833073  333054 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4117,"bootTime":1768007851,"procs":487,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:07.833123  333054 start.go:143] virtualization: kvm guest
	I0110 02:26:07.834821  333054 out.go:179] * [default-k8s-diff-port-313784] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:07.835923  333054 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:07.835916  333054 notify.go:221] Checking for updates...
	I0110 02:26:07.837984  333054 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:07.839482  333054 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:07.840535  333054 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:07.841623  333054 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:07.842615  333054 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:07.843939  333054 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:07.844481  333054 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:07.870643  333054 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:07.870733  333054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:07.924092  333054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 02:26:07.913649955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:07.924182  333054 docker.go:319] overlay module found
	I0110 02:26:07.925941  333054 out.go:179] * Using the docker driver based on existing profile
	I0110 02:26:07.927132  333054 start.go:309] selected driver: docker
	I0110 02:26:07.927147  333054 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:07.927245  333054 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:07.927765  333054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:07.984519  333054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 02:26:07.973516651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:07.984910  333054 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:26:07.984954  333054 cni.go:84] Creating CNI manager for ""
	I0110 02:26:07.985023  333054 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:07.985069  333054 start.go:353] cluster config:
	{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:07.987403  333054 out.go:179] * Starting "default-k8s-diff-port-313784" primary control-plane node in "default-k8s-diff-port-313784" cluster
	I0110 02:26:07.988483  333054 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:07.989508  333054 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:07.990567  333054 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:07.990609  333054 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:07.990625  333054 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:07.990663  333054 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:07.990716  333054 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:07.990730  333054 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:07.990843  333054 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:26:08.010250  333054 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:08.010267  333054 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:08.010283  333054 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:08.010310  333054 start.go:360] acquireMachinesLock for default-k8s-diff-port-313784: {Name:mk0116f4190c69f6825824fe0766dd2c4c328e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:08.010368  333054 start.go:364] duration metric: took 34.56µs to acquireMachinesLock for "default-k8s-diff-port-313784"
	I0110 02:26:08.010391  333054 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:26:08.010398  333054 fix.go:54] fixHost starting: 
	I0110 02:26:08.010597  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:08.027131  333054 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313784: state=Stopped err=<nil>
	W0110 02:26:08.027155  333054 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:26:07.341710  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:09.841691  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:06.854382  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:09.353495  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:07.863467  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:10.363618  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	I0110 02:26:08.028712  333054 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-313784" ...
	I0110 02:26:08.028768  333054 cli_runner.go:164] Run: docker start default-k8s-diff-port-313784
	I0110 02:26:08.282152  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:08.300901  333054 kic.go:430] container "default-k8s-diff-port-313784" state is running.
	I0110 02:26:08.301231  333054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:26:08.320668  333054 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:26:08.320867  333054 machine.go:94] provisionDockerMachine start ...
	I0110 02:26:08.320939  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:08.339117  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:08.339402  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:08.339424  333054 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:26:08.340210  333054 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33762->127.0.0.1:33125: read: connection reset by peer
	I0110 02:26:11.469516  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:26:11.469542  333054 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-313784"
	I0110 02:26:11.469598  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:11.487103  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:11.487320  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:11.487334  333054 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313784 && echo "default-k8s-diff-port-313784" | sudo tee /etc/hostname
	I0110 02:26:11.623549  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:26:11.623651  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:11.642386  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:11.642658  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:11.642685  333054 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:26:11.768142  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:26:11.768166  333054 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:26:11.768200  333054 ubuntu.go:190] setting up certificates
	I0110 02:26:11.768222  333054 provision.go:84] configureAuth start
	I0110 02:26:11.768283  333054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:26:11.785869  333054 provision.go:143] copyHostCerts
	I0110 02:26:11.785947  333054 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:26:11.785966  333054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:26:11.786052  333054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:26:11.786193  333054 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:26:11.786206  333054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:26:11.786244  333054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:26:11.786340  333054 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:26:11.786366  333054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:26:11.786407  333054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:26:11.786497  333054 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313784 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-313784 localhost minikube]
	I0110 02:26:11.887148  333054 provision.go:177] copyRemoteCerts
	I0110 02:26:11.887207  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:26:11.887242  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:11.905478  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:11.999168  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:26:12.016727  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 02:26:12.033427  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:26:12.050142  333054 provision.go:87] duration metric: took 281.897325ms to configureAuth
	I0110 02:26:12.050172  333054 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:26:12.050373  333054 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:12.050516  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.068396  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:12.068611  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:12.068628  333054 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:26:12.375106  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:26:12.375134  333054 machine.go:97] duration metric: took 4.054253767s to provisionDockerMachine
	I0110 02:26:12.375150  333054 start.go:293] postStartSetup for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:26:12.375165  333054 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:26:12.375227  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:26:12.375277  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.395289  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.488306  333054 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:26:12.491639  333054 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:26:12.491661  333054 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:26:12.491670  333054 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:26:12.491718  333054 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:26:12.491784  333054 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:26:12.491863  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:26:12.499277  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:12.516144  333054 start.go:296] duration metric: took 140.981819ms for postStartSetup
	I0110 02:26:12.516201  333054 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:26:12.516256  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.533744  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.622553  333054 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:26:12.627052  333054 fix.go:56] duration metric: took 4.616648935s for fixHost
	I0110 02:26:12.627073  333054 start.go:83] releasing machines lock for "default-k8s-diff-port-313784", held for 4.616695447s
	I0110 02:26:12.627125  333054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:26:12.644754  333054 ssh_runner.go:195] Run: cat /version.json
	I0110 02:26:12.644804  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.644858  333054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:26:12.644938  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.662804  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.663791  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.813601  333054 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:12.819979  333054 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:26:12.855500  333054 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:26:12.860375  333054 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:26:12.860433  333054 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:26:12.868544  333054 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:26:12.868568  333054 start.go:496] detecting cgroup driver to use...
	I0110 02:26:12.868593  333054 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:26:12.868631  333054 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:26:12.882972  333054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:26:12.894798  333054 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:26:12.894842  333054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:26:12.908207  333054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:26:12.919519  333054 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:26:12.999754  333054 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:26:13.080569  333054 docker.go:234] disabling docker service ...
	I0110 02:26:13.080624  333054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:26:13.094480  333054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:26:13.106467  333054 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:26:13.188001  333054 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:26:13.271255  333054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:26:13.283121  333054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:26:13.297206  333054 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:26:13.297269  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.305573  333054 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:26:13.305622  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.313895  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.322382  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.330475  333054 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:26:13.338061  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.346980  333054 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.355263  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.364008  333054 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:26:13.370833  333054 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:26:13.377742  333054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:13.455369  333054 ssh_runner.go:195] Run: sudo systemctl restart crio
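	(For reference: the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the values below once crio restarts. This is a sketch reconstructed from the commands in this log, not captured output; verify on the node.)
		# expected keys in the crio drop-in after provisioning (sketch, not captured output)
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.10.1"
		# cgroup_manager = "systemd"
		# conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",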
	I0110 02:26:13.589378  333054 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:26:13.589435  333054 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:26:13.593369  333054 start.go:574] Will wait 60s for crictl version
	I0110 02:26:13.593438  333054 ssh_runner.go:195] Run: which crictl
	I0110 02:26:13.596768  333054 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:26:13.622538  333054 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:26:13.622620  333054 ssh_runner.go:195] Run: crio --version
	I0110 02:26:13.648450  333054 ssh_runner.go:195] Run: crio --version
	I0110 02:26:13.676132  333054 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:26:13.677171  333054 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:13.694435  333054 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 02:26:13.698314  333054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:13.708123  333054 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:26:13.708230  333054 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:13.708285  333054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:13.742117  333054 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:13.742140  333054 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:26:13.742191  333054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:13.766290  333054 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:13.766313  333054 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:26:13.766321  333054 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.35.0 crio true true} ...
	I0110 02:26:13.766406  333054 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-313784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
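	(The kubelet unit above is written to the node further down in this log as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in. If the merged unit ever needs to be inspected by hand, systemctl can print it; a minimal check, assuming SSH access to the profile via the same binary used in this run.)
		# print the kubelet unit plus drop-ins as the node sees them (sketch)
		out/minikube-linux-amd64 -p default-k8s-diff-port-313784 ssh -- sudo systemctl cat kubelet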
	I0110 02:26:13.766469  333054 ssh_runner.go:195] Run: crio config
	I0110 02:26:13.809345  333054 cni.go:84] Creating CNI manager for ""
	I0110 02:26:13.809369  333054 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:13.809384  333054 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:26:13.809407  333054 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313784 NodeName:default-k8s-diff-port-313784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:26:13.809519  333054 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
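	(The config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. If it needs to be checked outside the normal start path, a kubeadm dry run against the same versioned binary directory is one option; a sketch, assuming kubeadm sits next to the kubelet binary referenced earlier.)
		# dry-run the generated kubeadm config on the node (sketch; kubeadm path assumed)
		sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run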
	
	I0110 02:26:13.809576  333054 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:26:13.817571  333054 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:26:13.817640  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:26:13.824947  333054 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:26:13.836838  333054 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:26:13.848954  333054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 02:26:13.862017  333054 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:26:13.865658  333054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:13.875387  333054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:13.956102  333054 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:13.981062  333054 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784 for IP: 192.168.94.2
	I0110 02:26:13.981083  333054 certs.go:195] generating shared ca certs ...
	I0110 02:26:13.981099  333054 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:13.981247  333054 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:26:13.981287  333054 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:26:13.981296  333054 certs.go:257] generating profile certs ...
	I0110 02:26:13.981392  333054 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key
	I0110 02:26:13.981458  333054 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d
	I0110 02:26:13.981494  333054 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key
	I0110 02:26:13.981593  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:26:13.981630  333054 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:26:13.981641  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:26:13.981666  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:26:13.981691  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:26:13.981715  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:26:13.981754  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:13.982380  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:26:14.002823  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:26:14.022178  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:26:14.041321  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:26:14.062771  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:26:14.083376  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:26:14.101344  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:26:14.118232  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:26:14.134901  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:26:14.151403  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:26:14.167997  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:26:14.185731  333054 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:26:14.197712  333054 ssh_runner.go:195] Run: openssl version
	I0110 02:26:14.203568  333054 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.210454  333054 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:26:14.217293  333054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.220958  333054 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.221008  333054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.255604  333054 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:14.262599  333054 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.269458  333054 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:26:14.276589  333054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.279973  333054 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.280016  333054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.314627  333054 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:26:14.322662  333054 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.329919  333054 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:26:14.337288  333054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.341459  333054 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.341522  333054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.381801  333054 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:26:14.389574  333054 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:26:14.393174  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:26:14.428232  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:26:14.462933  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:26:14.504201  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:26:14.553987  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:26:14.603619  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
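	The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate is still good for at least another 24 hours before the profile reuses it. A minimal Go sketch of the same check with `crypto/x509` (the local file name and the 24h window below are illustrative, not minikube's code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM-encoded certificate at path expires within
// the given window; openssl's -checkend exits non-zero in exactly that case.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local path; the log checks files under /var/lib/minikube/certs on the node.
	expiring, err := checkend("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```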
	I0110 02:26:14.652310  333054 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:14.652403  333054 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:26:14.652478  333054 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:26:14.683632  333054 cri.go:96] found id: "35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87"
	I0110 02:26:14.683657  333054 cri.go:96] found id: "fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f"
	I0110 02:26:14.683663  333054 cri.go:96] found id: "b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371"
	I0110 02:26:14.683672  333054 cri.go:96] found id: "6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d"
	I0110 02:26:14.683677  333054 cri.go:96] found id: ""
	I0110 02:26:14.683722  333054 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:26:14.696074  333054 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:14Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:26:14.696137  333054 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:26:14.704953  333054 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:26:14.704972  333054 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:26:14.705015  333054 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:26:14.712419  333054 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:26:14.713669  333054 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-313784" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:14.714578  333054 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-10552/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-313784" cluster setting kubeconfig missing "default-k8s-diff-port-313784" context setting]
	I0110 02:26:14.715863  333054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:14.718193  333054 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:26:14.726039  333054 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0110 02:26:14.726065  333054 kubeadm.go:602] duration metric: took 21.086368ms to restartPrimaryControlPlane
	I0110 02:26:14.726075  333054 kubeadm.go:403] duration metric: took 73.773963ms to StartCluster
	I0110 02:26:14.726090  333054 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:14.726146  333054 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:14.728022  333054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
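	The kubeconfig repair above goes through a "WriteFile acquiring" lock so concurrent minikube processes (several profiles are starting in parallel in this run) do not clobber each other's kubeconfig edits. A hypothetical sketch of the same idea using a simple O_EXCL lock file; this is not minikube's actual lock.go, though the 500ms retry delay mirrors the Delay:500ms shown in the log:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// withFileLock acquires path+".lock" exclusively, retrying until timeout,
// runs fn, and then releases the lock.
func withFileLock(path string, timeout time.Duration, fn func() error) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			return fn()
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", lock)
		}
		time.Sleep(500 * time.Millisecond) // matches the Delay:500ms in the log
	}
}

func main() {
	err := withFileLock("kubeconfig", time.Minute, func() error {
		return os.WriteFile("kubeconfig", []byte("# updated cluster/context entries\n"), 0o600)
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```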
	I0110 02:26:14.728258  333054 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:14.728335  333054 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:26:14.728441  333054 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-313784"
	I0110 02:26:14.728461  333054 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-313784"
	W0110 02:26:14.728472  333054 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:26:14.728500  333054 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:26:14.728515  333054 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:14.728508  333054 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-313784"
	I0110 02:26:14.728521  333054 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-313784"
	I0110 02:26:14.728542  333054 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-313784"
	W0110 02:26:14.728552  333054 addons.go:248] addon dashboard should already be in state true
	I0110 02:26:14.728555  333054 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313784"
	I0110 02:26:14.728592  333054 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:26:14.728874  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.728984  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.729045  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.732296  333054 out.go:179] * Verifying Kubernetes components...
	I0110 02:26:14.733473  333054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:14.754266  333054 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-313784"
	W0110 02:26:14.754286  333054 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:26:14.754310  333054 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:26:14.754696  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.754760  333054 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:26:14.754821  333054 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:26:14.756144  333054 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:26:14.756164  333054 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:26:14.756219  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:14.756240  333054 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W0110 02:26:12.342036  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:14.342305  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:11.354082  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:13.853969  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:12.862878  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:14.863968  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	I0110 02:26:14.757517  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:26:14.757537  333054 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:26:14.757593  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:14.784678  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:14.789235  333054 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:26:14.789258  333054 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:26:14.789315  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:14.799372  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:14.817340  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:14.887998  333054 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:14.901669  333054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:26:14.901822  333054 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313784" to be "Ready" ...
	I0110 02:26:14.912232  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:26:14.912252  333054 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:26:14.926146  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:26:14.926179  333054 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:26:14.930180  333054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:26:14.940513  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:26:14.940536  333054 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:26:14.955099  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:26:14.955220  333054 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:26:14.969871  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:26:14.969913  333054 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:26:14.984030  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:26:14.984048  333054 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:26:14.997651  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:26:14.997731  333054 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:26:15.009841  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:26:15.009865  333054 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:26:15.021979  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:26:15.021997  333054 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:26:15.033816  333054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:26:16.424130  333054 node_ready.go:49] node "default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:16.424164  333054 node_ready.go:38] duration metric: took 1.522303458s for node "default-k8s-diff-port-313784" to be "Ready" ...
	I0110 02:26:16.424180  333054 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:26:16.424229  333054 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:26:16.940709  333054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039005236s)
	I0110 02:26:16.940756  333054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.010552905s)
	I0110 02:26:16.940876  333054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.907021204s)
	I0110 02:26:16.940923  333054 api_server.go:72] duration metric: took 2.212633663s to wait for apiserver process to appear ...
	I0110 02:26:16.940937  333054 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:26:16.940973  333054 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0110 02:26:16.944018  333054 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-313784 addons enable metrics-server
	
	I0110 02:26:16.945387  333054 api_server.go:325] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:26:16.945409  333054 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:26:16.947599  333054 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:26:16.948696  333054 addons.go:530] duration metric: took 2.220379713s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:26:17.441512  333054 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0110 02:26:17.447158  333054 api_server.go:325] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:26:17.447192  333054 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
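	Both probes above return 500 while apiserver post-start hooks (rbac/bootstrap-roles and, in the first probe, the priority-class bootstrap) are still pending; the start code simply re-polls /healthz until it answers 200, which it does at 02:26:17. A minimal sketch of such a poll loop, assuming the apiserver's self-signed serving certificate is acceptable for a local health probe (hence InsecureSkipVerify):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Local health probe against a self-signed apiserver cert; skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```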
	W0110 02:26:16.342464  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	I0110 02:26:16.843631  324231 pod_ready.go:94] pod "coredns-5dd5756b68-vc68c" is "Ready"
	I0110 02:26:16.843664  324231 pod_ready.go:86] duration metric: took 36.0074275s for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.846562  324231 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.852708  324231 pod_ready.go:94] pod "etcd-old-k8s-version-188604" is "Ready"
	I0110 02:26:16.852729  324231 pod_ready.go:86] duration metric: took 6.144531ms for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.855561  324231 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.859755  324231 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-188604" is "Ready"
	I0110 02:26:16.859776  324231 pod_ready.go:86] duration metric: took 4.190621ms for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.862770  324231 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.039720  324231 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-188604" is "Ready"
	I0110 02:26:17.039743  324231 pod_ready.go:86] duration metric: took 176.948921ms for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.240978  324231 pod_ready.go:83] waiting for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.640874  324231 pod_ready.go:94] pod "kube-proxy-c445q" is "Ready"
	I0110 02:26:17.640917  324231 pod_ready.go:86] duration metric: took 399.91418ms for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.840732  324231 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:18.240702  324231 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-188604" is "Ready"
	I0110 02:26:18.240726  324231 pod_ready.go:86] duration metric: took 399.96876ms for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:18.240738  324231 pod_ready.go:40] duration metric: took 37.40938402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:18.285679  324231 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I0110 02:26:18.287038  324231 out.go:203] 
	W0110 02:26:18.288125  324231 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:26:18.289214  324231 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:26:18.290304  324231 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-188604" cluster and "default" namespace by default
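	The old-k8s-version profile finishes with a client/cluster version comparison: kubectl 1.35.0 against Kubernetes 1.28.0 is a minor skew of 7, which is why the compatibility warning is printed above. A small sketch of that arithmetic; treating anything beyond one minor version as warning-worthy is an assumption based on the usual kubectl skew policy, not a quote of minikube's code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of two
// "major.minor.patch" strings, e.g. "1.35.0" vs "1.28.0" -> 7.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c < s {
		return s - c, nil
	}
	return c - s, nil
}

func main() {
	skew, _ := minorSkew("1.35.0", "1.28.0")
	fmt.Printf("minor skew: %d\n", skew) // 7
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with the cluster version")
	}
}
```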
	W0110 02:26:15.854580  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:18.355311  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:20.355617  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:16.864419  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:19.362850  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:21.363778  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	I0110 02:26:17.941955  333054 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0110 02:26:17.946088  333054 api_server.go:325] https://192.168.94.2:8444/healthz returned 200:
	ok
	I0110 02:26:17.947035  333054 api_server.go:141] control plane version: v1.35.0
	I0110 02:26:17.947057  333054 api_server.go:131] duration metric: took 1.006110565s to wait for apiserver health ...
	I0110 02:26:17.947069  333054 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:26:17.950502  333054 system_pods.go:59] 8 kube-system pods found
	I0110 02:26:17.950533  333054 system_pods.go:61] "coredns-7d764666f9-rhgg5" [7b2c9aeb-37f2-4c60-ac35-a17f643dba15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:26:17.950544  333054 system_pods.go:61] "etcd-default-k8s-diff-port-313784" [b49d0042-7385-49c7-ba65-5a452ae99050] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:26:17.950551  333054 system_pods.go:61] "kindnet-wbscw" [4ad21b3c-b663-4ee1-b481-19655b22e160] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:26:17.950561  333054 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313784" [f59fcb0e-e243-46f2-aa8e-beda31fa8454] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:26:17.950574  333054 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313784" [fb4fc971-af17-4755-ad89-b9926ae3f9fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:26:17.950599  333054 system_pods.go:61] "kube-proxy-6dcdf" [e2cb4683-cef0-4b78-9044-a209d81b5ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:26:17.950608  333054 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313784" [beee3e76-d418-4002-974d-39fd6cd498e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:26:17.950627  333054 system_pods.go:61] "storage-provisioner" [5576c2bc-6ca6-49ef-98e4-27f810e200c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:26:17.950637  333054 system_pods.go:74] duration metric: took 3.563113ms to wait for pod list to return data ...
	I0110 02:26:17.950646  333054 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:26:17.952901  333054 default_sa.go:45] found service account: "default"
	I0110 02:26:17.952920  333054 default_sa.go:55] duration metric: took 2.266196ms for default service account to be created ...
	I0110 02:26:17.952928  333054 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:26:17.955237  333054 system_pods.go:86] 8 kube-system pods found
	I0110 02:26:17.955264  333054 system_pods.go:89] "coredns-7d764666f9-rhgg5" [7b2c9aeb-37f2-4c60-ac35-a17f643dba15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:26:17.955275  333054 system_pods.go:89] "etcd-default-k8s-diff-port-313784" [b49d0042-7385-49c7-ba65-5a452ae99050] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:26:17.955284  333054 system_pods.go:89] "kindnet-wbscw" [4ad21b3c-b663-4ee1-b481-19655b22e160] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:26:17.955297  333054 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313784" [f59fcb0e-e243-46f2-aa8e-beda31fa8454] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:26:17.955307  333054 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313784" [fb4fc971-af17-4755-ad89-b9926ae3f9fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:26:17.955319  333054 system_pods.go:89] "kube-proxy-6dcdf" [e2cb4683-cef0-4b78-9044-a209d81b5ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:26:17.955360  333054 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313784" [beee3e76-d418-4002-974d-39fd6cd498e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:26:17.955377  333054 system_pods.go:89] "storage-provisioner" [5576c2bc-6ca6-49ef-98e4-27f810e200c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:26:17.955389  333054 system_pods.go:126] duration metric: took 2.454636ms to wait for k8s-apps to be running ...
	I0110 02:26:17.955400  333054 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:26:17.955456  333054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:17.968328  333054 system_svc.go:56] duration metric: took 12.920893ms WaitForService to wait for kubelet
	I0110 02:26:17.968357  333054 kubeadm.go:587] duration metric: took 3.240069063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:26:17.968378  333054 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:26:17.971097  333054 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:26:17.971123  333054 node_conditions.go:123] node cpu capacity is 8
	I0110 02:26:17.971140  333054 node_conditions.go:105] duration metric: took 2.756049ms to run NodePressure ...
	I0110 02:26:17.971152  333054 start.go:242] waiting for startup goroutines ...
	I0110 02:26:17.971166  333054 start.go:247] waiting for cluster config update ...
	I0110 02:26:17.971185  333054 start.go:256] writing updated cluster config ...
	I0110 02:26:17.971431  333054 ssh_runner.go:195] Run: rm -f paused
	I0110 02:26:17.975125  333054 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:17.978548  333054 pod_ready.go:83] waiting for pod "coredns-7d764666f9-rhgg5" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:26:19.983349  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:21.985141  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:23.364403  325613 pod_ready.go:94] pod "coredns-7d764666f9-lfdgm" is "Ready"
	I0110 02:26:23.364435  325613 pod_ready.go:86] duration metric: took 36.506559973s for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.367345  325613 pod_ready.go:83] waiting for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.371731  325613 pod_ready.go:94] pod "etcd-embed-certs-872415" is "Ready"
	I0110 02:26:23.371763  325613 pod_ready.go:86] duration metric: took 4.396045ms for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.374001  325613 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.378543  325613 pod_ready.go:94] pod "kube-apiserver-embed-certs-872415" is "Ready"
	I0110 02:26:23.378563  325613 pod_ready.go:86] duration metric: took 4.542133ms for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.380655  325613 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.562055  325613 pod_ready.go:94] pod "kube-controller-manager-embed-certs-872415" is "Ready"
	I0110 02:26:23.562084  325613 pod_ready.go:86] duration metric: took 181.404493ms for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.762713  325613 pod_ready.go:83] waiting for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.162842  325613 pod_ready.go:94] pod "kube-proxy-47n8d" is "Ready"
	I0110 02:26:24.162873  325613 pod_ready.go:86] duration metric: took 400.132834ms for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.364210  325613 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.762428  325613 pod_ready.go:94] pod "kube-scheduler-embed-certs-872415" is "Ready"
	I0110 02:26:24.762456  325613 pod_ready.go:86] duration metric: took 398.220633ms for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.762473  325613 pod_ready.go:40] duration metric: took 37.908107093s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:24.818061  325613 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:24.819835  325613 out.go:179] * Done! kubectl is now configured to use "embed-certs-872415" cluster and "default" namespace by default
	W0110 02:26:22.854633  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:24.855141  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:23.985202  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:26.483457  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:27.354070  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	I0110 02:26:28.853518  327170 pod_ready.go:94] pod "coredns-7d764666f9-xrkw6" is "Ready"
	I0110 02:26:28.853541  327170 pod_ready.go:86] duration metric: took 38.00485446s for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.855776  327170 pod_ready.go:83] waiting for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.859059  327170 pod_ready.go:94] pod "etcd-no-preload-190877" is "Ready"
	I0110 02:26:28.859077  327170 pod_ready.go:86] duration metric: took 3.283782ms for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.860769  327170 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.863911  327170 pod_ready.go:94] pod "kube-apiserver-no-preload-190877" is "Ready"
	I0110 02:26:28.863928  327170 pod_ready.go:86] duration metric: took 3.138392ms for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.865531  327170 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.051921  327170 pod_ready.go:94] pod "kube-controller-manager-no-preload-190877" is "Ready"
	I0110 02:26:29.051952  327170 pod_ready.go:86] duration metric: took 186.403273ms for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.251770  327170 pod_ready.go:83] waiting for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.651751  327170 pod_ready.go:94] pod "kube-proxy-hrztb" is "Ready"
	I0110 02:26:29.651782  327170 pod_ready.go:86] duration metric: took 399.975949ms for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.852180  327170 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:30.252397  327170 pod_ready.go:94] pod "kube-scheduler-no-preload-190877" is "Ready"
	I0110 02:26:30.252424  327170 pod_ready.go:86] duration metric: took 400.217373ms for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:30.252447  327170 pod_ready.go:40] duration metric: took 39.406842868s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:30.296441  327170 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:30.299390  327170 out.go:179] * Done! kubectl is now configured to use "no-preload-190877" cluster and "default" namespace by default
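	Each profile's startup ends with the pod_ready wait seen throughout this section: for every kube-system pod matching one of the listed component labels, keep polling until its Ready condition is True or the pod is gone. A hedged client-go sketch of that loop for a single pod; the kubeconfig path and the polling interval are assumptions, and this is not minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True, it disappears,
// or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return nil // pod is gone, which the wait also accepts
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-7d764666f9-xrkw6", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```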
	
	
	==> CRI-O <==
	Jan 10 02:25:59 old-k8s-version-188604 crio[574]: time="2026-01-10T02:25:59.90439991Z" level=info msg="Started container" PID=1779 containerID=894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper id=825a4d68-bd94-47be-8e2d-19ff3a7e36c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7061713ec27812cd2595f4d9823c8353c5ce05a2983b0531073cfa81e35681c2
	Jan 10 02:26:00 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:00.84700936Z" level=info msg="Removing container: ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4" id=db10a1c9-a677-4d4f-b94f-c7f334e1f9e8 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:00 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:00.856262483Z" level=info msg="Removed container ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=db10a1c9-a677-4d4f-b94f-c7f334e1f9e8 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.871098853Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fddad4c2-b813-4d91-ab23-09d450c38bef name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.872012716Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c4c09957-dd63-483a-9d7d-0e646cd91874 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.873004145Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c4236c8b-9d46-479c-9829-cf731b8a41ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.873147755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.877665334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.877841316Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/92c3be459fff5f82fbfefc81122934eb344eb8d67c42913062baa4e802055b81/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.877868256Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/92c3be459fff5f82fbfefc81122934eb344eb8d67c42913062baa4e802055b81/merged/etc/group: no such file or directory"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.878115443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.904377286Z" level=info msg="Created container 3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e: kube-system/storage-provisioner/storage-provisioner" id=c4236c8b-9d46-479c-9829-cf731b8a41ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.904870998Z" level=info msg="Starting container: 3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e" id=e84ba7d9-2425-46d7-a622-93ec781efc69 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.906553502Z" level=info msg="Started container" PID=1793 containerID=3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e description=kube-system/storage-provisioner/storage-provisioner id=e84ba7d9-2425-46d7-a622-93ec781efc69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d04ac23222b33ba2c42fee6a0a3e7b100eaadd4b852928775d3f51d0e27e16d8
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.761159615Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=39d641d6-07ec-4824-baaa-c3bb699fde8f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.762165892Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=925388ab-aa85-4a9c-b1df-d138c0ce9d5e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.763254942Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=f3855ee6-6da8-4ae2-8837-fc74ef3bdacf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.763423742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.77009621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.770877812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.81180073Z" level=info msg="Created container 7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=f3855ee6-6da8-4ae2-8837-fc74ef3bdacf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.812496925Z" level=info msg="Starting container: 7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50" id=b62937f1-cf05-4e10-a86e-2e4b4be440ae name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.814517026Z" level=info msg="Started container" PID=1812 containerID=7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper id=b62937f1-cf05-4e10-a86e-2e4b4be440ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=7061713ec27812cd2595f4d9823c8353c5ce05a2983b0531073cfa81e35681c2
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.888409348Z" level=info msg="Removing container: 894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870" id=f7fc5fa6-54aa-40e1-815f-041c10770e86 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.897544924Z" level=info msg="Removed container 894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=f7fc5fa6-54aa-40e1-815f-041c10770e86 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	7e64336c4e44a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   7061713ec2781       dashboard-metrics-scraper-5f989dc9cf-qgv79       kubernetes-dashboard
	3e46f34495823       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   d04ac23222b33       storage-provisioner                              kube-system
	63fa889860ac4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   febd3354b9f8c       kubernetes-dashboard-8694d4445c-lq5lf            kubernetes-dashboard
	3eef71caacb42       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   e2da0b126a67a       coredns-5dd5756b68-vc68c                         kube-system
	d4fa616c8db72       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   7ceaf155b9d71       busybox                                          default
	832e7427e079d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   79f349779dbd1       kindnet-25dkr                                    kube-system
	2994793acd647       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   d04ac23222b33       storage-provisioner                              kube-system
	acf5b5647b7a2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   303924485e629       kube-proxy-c445q                                 kube-system
	861ce74c9faf0       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   c57bcead224d0       kube-scheduler-old-k8s-version-188604            kube-system
	c7891e84c7b07       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   d830b2ea5a725       kube-controller-manager-old-k8s-version-188604   kube-system
	a022cc94e780e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   ac3902f957dcf       kube-apiserver-old-k8s-version-188604            kube-system
	583aa40ef23fe       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   4d952430c221f       etcd-old-k8s-version-188604                      kube-system
	
	
	==> coredns [3eef71caacb4290f3264c8c7c1487a2a3a32057cebca2adde7fb5c9b5446e232] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43656 - 16341 "HINFO IN 3070078427942198497.177982554793797762. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.078035392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-188604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-188604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=old-k8s-version-188604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-188604
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:25:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-188604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                8835f89d-8806-4482-b07d-960e07e8dff0
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-vc68c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-188604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-25dkr                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-188604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-188604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-c445q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-188604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-qgv79        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lq5lf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-188604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-188604 event: Registered Node old-k8s-version-188604 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-188604 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-188604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node old-k8s-version-188604 event: Registered Node old-k8s-version-188604 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [583aa40ef23feaf98f416116520b822f7fed26e3509ae9a5afe569be8de6ceff] <==
	{"level":"info","ts":"2026-01-10T02:25:37.370694Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:25:37.370728Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:25:37.370987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-10T02:25:37.371114Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2026-01-10T02:25:37.371273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:25:37.371443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:25:37.376736Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:25:37.377022Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:25:37.377092Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:25:37.377202Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:37.377239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:38.65739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:38.657449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:38.657489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:38.657503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.657508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.657516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.657523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.658811Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:38.65883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:38.658814Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-188604 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:38.659049Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:38.659075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:38.660141Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:25:38.660211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 02:26:32 up  1:09,  0 user,  load average: 3.50, 3.51, 2.38
	Linux old-k8s-version-188604 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [832e7427e079dec6ed5e1274fdf2e96dc09e2cf11be39eb0ad4d7eb590ba7cb0] <==
	I0110 02:25:40.427646       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:40.427851       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:25:40.428028       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:40.428053       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:40.428069       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:40.725178       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:40.725212       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:40.725226       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:40.727069       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:41.025467       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:41.025558       1 metrics.go:72] Registering metrics
	I0110 02:25:41.025707       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:50.725994       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:25:50.726046       1 main.go:301] handling current node
	I0110 02:26:00.725956       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:00.725984       1 main.go:301] handling current node
	I0110 02:26:10.725014       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:10.725066       1 main.go:301] handling current node
	I0110 02:26:20.726614       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:20.726647       1 main.go:301] handling current node
	I0110 02:26:30.731697       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:30.731727       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a022cc94e780e8ed928e70a6eda0944c970eecd9d4d3e3af71a2fd593d685500] <==
	I0110 02:25:39.564074       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0110 02:25:39.664035       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 02:25:39.664090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 02:25:39.664100       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 02:25:39.664119       1 aggregator.go:166] initial CRD sync complete...
	I0110 02:25:39.664126       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 02:25:39.664132       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:25:39.664137       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:25:39.664269       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:25:39.664275       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 02:25:39.664295       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 02:25:39.664517       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 02:25:39.691242       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 02:25:39.699735       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:40.568361       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 02:25:40.644614       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 02:25:40.679714       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 02:25:40.698645       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:40.708862       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:40.720387       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 02:25:40.762165       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.128.110"}
	I0110 02:25:40.777911       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.221.81"}
	I0110 02:25:52.345792       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:25:52.350365       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 02:25:52.513373       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c7891e84c7b07fa814892b06d908aeaae4f1e237406a3b4e0c937ca6047439f5] <==
	I0110 02:25:52.524263       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-lq5lf"
	I0110 02:25:52.524330       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 02:25:52.526134       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-qgv79"
	I0110 02:25:52.528793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.205692ms"
	I0110 02:25:52.533438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.370765ms"
	I0110 02:25:52.537839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.004994ms"
	I0110 02:25:52.550057       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 02:25:52.550239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.754174ms"
	I0110 02:25:52.550316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.836µs"
	I0110 02:25:52.550392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.737µs"
	I0110 02:25:52.551164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.273949ms"
	I0110 02:25:52.551338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.832µs"
	I0110 02:25:52.558070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.404µs"
	I0110 02:25:52.869461       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:25:52.946352       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:25:52.946397       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 02:25:57.858163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.238279ms"
	I0110 02:25:57.858269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.575µs"
	I0110 02:25:59.866452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="280.113µs"
	I0110 02:26:00.856500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.116µs"
	I0110 02:26:01.893937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.32µs"
	I0110 02:26:16.492487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.081821ms"
	I0110 02:26:16.492703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.395µs"
	I0110 02:26:16.899247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.362µs"
	I0110 02:26:22.849416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.262µs"
	
	
	==> kube-proxy [acf5b5647b7a260868d1a73059bea70c514cbda74b322acd6ecef0169e38684f] <==
	I0110 02:25:40.241685       1 server_others.go:69] "Using iptables proxy"
	I0110 02:25:40.259042       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0110 02:25:40.286904       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:40.291286       1 server_others.go:152] "Using iptables Proxier"
	I0110 02:25:40.291322       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 02:25:40.291332       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 02:25:40.291363       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 02:25:40.291640       1 server.go:846] "Version info" version="v1.28.0"
	I0110 02:25:40.291662       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:40.292237       1 config.go:97] "Starting endpoint slice config controller"
	I0110 02:25:40.292272       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 02:25:40.292355       1 config.go:188] "Starting service config controller"
	I0110 02:25:40.292360       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 02:25:40.292392       1 config.go:315] "Starting node config controller"
	I0110 02:25:40.292398       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 02:25:40.392416       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 02:25:40.392640       1 shared_informer.go:318] Caches are synced for service config
	I0110 02:25:40.392655       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [861ce74c9faf076868c60d47909834154c9a2f93ac74567527702fb1423497f3] <==
	W0110 02:25:39.630411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0110 02:25:39.630450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0110 02:25:39.630551       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	E0110 02:25:39.630579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	W0110 02:25:39.630704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.630899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.630946       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0110 02:25:39.630968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0110 02:25:39.631596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.631629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.631805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.631828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.632054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0110 02:25:39.632082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.632081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0110 02:25:39.632099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.632370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.634221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.634037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0110 02:25:39.634284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0110 02:25:39.634094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0110 02:25:39.634301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0110 02:25:39.634159       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.634317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I0110 02:25:40.618813       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.534834     734 topology_manager.go:215] "Topology Admit Handler" podUID="092a7a28-4eb5-4624-b51a-a672142e3519" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-qgv79"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677329     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/44e589a7-3475-4e98-95fc-c5f990e17892-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lq5lf\" (UID: \"44e589a7-3475-4e98-95fc-c5f990e17892\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lq5lf"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677438     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5b5\" (UniqueName: \"kubernetes.io/projected/092a7a28-4eb5-4624-b51a-a672142e3519-kube-api-access-dd5b5\") pod \"dashboard-metrics-scraper-5f989dc9cf-qgv79\" (UID: \"092a7a28-4eb5-4624-b51a-a672142e3519\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677494     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blts5\" (UniqueName: \"kubernetes.io/projected/44e589a7-3475-4e98-95fc-c5f990e17892-kube-api-access-blts5\") pod \"kubernetes-dashboard-8694d4445c-lq5lf\" (UID: \"44e589a7-3475-4e98-95fc-c5f990e17892\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lq5lf"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677557     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/092a7a28-4eb5-4624-b51a-a672142e3519-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-qgv79\" (UID: \"092a7a28-4eb5-4624-b51a-a672142e3519\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79"
	Jan 10 02:25:59 old-k8s-version-188604 kubelet[734]: I0110 02:25:59.842076     734 scope.go:117] "RemoveContainer" containerID="ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4"
	Jan 10 02:25:59 old-k8s-version-188604 kubelet[734]: I0110 02:25:59.864031     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lq5lf" podStartSLOduration=3.732206032 podCreationTimestamp="2026-01-10 02:25:52 +0000 UTC" firstStartedPulling="2026-01-10 02:25:52.866499857 +0000 UTC m=+16.211790880" lastFinishedPulling="2026-01-10 02:25:56.99824478 +0000 UTC m=+20.343535795" observedRunningTime="2026-01-10 02:25:57.851388128 +0000 UTC m=+21.196679173" watchObservedRunningTime="2026-01-10 02:25:59.863950947 +0000 UTC m=+23.209242041"
	Jan 10 02:26:00 old-k8s-version-188604 kubelet[734]: I0110 02:26:00.845624     734 scope.go:117] "RemoveContainer" containerID="ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4"
	Jan 10 02:26:00 old-k8s-version-188604 kubelet[734]: I0110 02:26:00.845807     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:00 old-k8s-version-188604 kubelet[734]: E0110 02:26:00.846185     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:01 old-k8s-version-188604 kubelet[734]: I0110 02:26:01.849071     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:01 old-k8s-version-188604 kubelet[734]: E0110 02:26:01.849406     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:02 old-k8s-version-188604 kubelet[734]: I0110 02:26:02.851711     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:02 old-k8s-version-188604 kubelet[734]: E0110 02:26:02.852027     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:10 old-k8s-version-188604 kubelet[734]: I0110 02:26:10.870586     734 scope.go:117] "RemoveContainer" containerID="2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: I0110 02:26:16.760472     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: I0110 02:26:16.887249     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: I0110 02:26:16.887489     734 scope.go:117] "RemoveContainer" containerID="7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: E0110 02:26:16.887827     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:22 old-k8s-version-188604 kubelet[734]: I0110 02:26:22.837236     734 scope.go:117] "RemoveContainer" containerID="7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	Jan 10 02:26:22 old-k8s-version-188604 kubelet[734]: E0110 02:26:22.837693     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: kubelet.service: Consumed 1.522s CPU time.
	
	
	==> kubernetes-dashboard [63fa889860ac4c551c147aa893d7114cdb4799c26166f95c40f2c08d1a1f8641] <==
	2026/01/10 02:25:57 Using namespace: kubernetes-dashboard
	2026/01/10 02:25:57 Using in-cluster config to connect to apiserver
	2026/01/10 02:25:57 Using secret token for csrf signing
	2026/01/10 02:25:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:25:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:25:57 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 02:25:57 Generating JWE encryption key
	2026/01/10 02:25:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:25:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:25:57 Initializing JWE encryption key from synchronized object
	2026/01/10 02:25:57 Creating in-cluster Sidecar client
	2026/01/10 02:25:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:25:57 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:25:57 Starting overwatch
	
	
	==> storage-provisioner [2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119] <==
	I0110 02:25:40.197785       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:10.203111       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e] <==
	I0110 02:26:10.917925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:10.925595       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:10.925641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 02:26:28.322468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:26:28.322597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dd8f8c3-352c-4a42-bd82-a8d8489739cb", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-188604_fe0624cd-2670-4404-94e0-ce389174df7b became leader
	I0110 02:26:28.322641       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-188604_fe0624cd-2670-4404-94e0-ce389174df7b!
	I0110 02:26:28.422816       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-188604_fe0624cd-2670-4404-94e0-ce389174df7b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188604 -n old-k8s-version-188604
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188604 -n old-k8s-version-188604: exit status 2 (330.835675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-188604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-188604
helpers_test.go:244: (dbg) docker inspect old-k8s-version-188604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339",
	        "Created": "2026-01-10T02:24:20.28221194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324437,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:30.690182836Z",
	            "FinishedAt": "2026-01-10T02:25:29.798989359Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/hostname",
	        "HostsPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/hosts",
	        "LogPath": "/var/lib/docker/containers/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339/d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339-json.log",
	        "Name": "/old-k8s-version-188604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-188604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-188604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d326f6fd278c8e3f5e1c5158ac7eaa918a8656ad5175a2c61bec0342993ee339",
	                "LowerDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/decdf227318c44fe92cc9f6c020a718b43e24e63c1b9dc9404ee3a93d27ae9aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-188604",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-188604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-188604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-188604",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-188604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9e7d9d480244b5134f253387bfe30b1a48ee2c5c4e005be71a4f524b8b78a2e9",
	            "SandboxKey": "/var/run/docker/netns/9e7d9d480244",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-188604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bb0788a00cda98d7846d64fc1fe66eb98afdb7fd381de926d036feac84ba741",
	                    "EndpointID": "acdb1d80437cbc9317afbab691f1dd756c766bcd8506b5b7ddecbe0fc7fe8778",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:f1:cb:af:42:3b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-188604",
	                        "d326f6fd278c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
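Note: the NetworkSettings.Ports map in the inspect output above is where the published host ports live; 22/tcp inside the container is bound to 127.0.0.1:33110, and the minikube logs further down read the same value with a docker inspect --format template before opening an SSH client. A small Go sketch of that lookup follows, decoding only the JSON fields shown above; the type and function names are made up for illustration.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding matches the entries under NetworkSettings.Ports in the
	// docker inspect output shown above.
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	// sshHostPort returns the host port published for 22/tcp, the value the
	// test logs extract with a --format template when dialing SSH.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no published binding for 22/tcp")
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-188604")
		fmt.Println(port, err)
	}
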
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604: exit status 2 (323.383063ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
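Note: the status probe above prints Running for the host yet exits with status 2, and the harness records that as "(may be ok)"; in this Pause test the host container is still up while cluster components have been paused, so a non-zero code from status is not by itself proof that the node is down. A minimal Go sketch of capturing both the printed state and the exit code, assuming the same binary path and profile name (the helper name is illustrative, not the actual test code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus runs `minikube status --format={{.Host}}` for a profile and
	// returns the printed host state together with the process exit code, so a
	// caller can treat "Running" with exit 2 differently from a hard failure.
	func hostStatus(profile string) (state string, exitCode int, err error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		state = strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return state, exitErr.ExitCode(), nil
		}
		return state, 0, err
	}

	func main() {
		state, code, err := hostStatus("old-k8s-version-188604")
		fmt.Println(state, code, err)
	}
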
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-188604 logs -n 25: (1.058928287s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-647049 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ ssh     │ -p bridge-647049 sudo crio config                                                                                                                                                                                                             │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:24 UTC │
	│ delete  │ -p bridge-647049                                                                                                                                                                                                                              │ bridge-647049                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │ 10 Jan 26 02:25 UTC │
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                                                                                               │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p no-preload-190877 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:07.831058  333054 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:07.831142  333054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:07.831149  333054 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:07.831154  333054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:07.831353  333054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:07.831767  333054 out.go:368] Setting JSON to false
	I0110 02:26:07.833073  333054 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4117,"bootTime":1768007851,"procs":487,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:07.833123  333054 start.go:143] virtualization: kvm guest
	I0110 02:26:07.834821  333054 out.go:179] * [default-k8s-diff-port-313784] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:07.835923  333054 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:07.835916  333054 notify.go:221] Checking for updates...
	I0110 02:26:07.837984  333054 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:07.839482  333054 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:07.840535  333054 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:07.841623  333054 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:07.842615  333054 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:07.843939  333054 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:07.844481  333054 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:07.870643  333054 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:07.870733  333054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:07.924092  333054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 02:26:07.913649955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:07.924182  333054 docker.go:319] overlay module found
	I0110 02:26:07.925941  333054 out.go:179] * Using the docker driver based on existing profile
	I0110 02:26:07.927132  333054 start.go:309] selected driver: docker
	I0110 02:26:07.927147  333054 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:07.927245  333054 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:07.927765  333054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:07.984519  333054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 02:26:07.973516651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:07.984910  333054 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:26:07.984954  333054 cni.go:84] Creating CNI manager for ""
	I0110 02:26:07.985023  333054 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:07.985069  333054 start.go:353] cluster config:
	{Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:07.987403  333054 out.go:179] * Starting "default-k8s-diff-port-313784" primary control-plane node in "default-k8s-diff-port-313784" cluster
	I0110 02:26:07.988483  333054 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:07.989508  333054 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:07.990567  333054 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:07.990609  333054 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:07.990625  333054 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:07.990663  333054 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:07.990716  333054 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:07.990730  333054 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:07.990843  333054 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:26:08.010250  333054 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:08.010267  333054 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:08.010283  333054 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:08.010310  333054 start.go:360] acquireMachinesLock for default-k8s-diff-port-313784: {Name:mk0116f4190c69f6825824fe0766dd2c4c328e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:08.010368  333054 start.go:364] duration metric: took 34.56µs to acquireMachinesLock for "default-k8s-diff-port-313784"
	I0110 02:26:08.010391  333054 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:26:08.010398  333054 fix.go:54] fixHost starting: 
	I0110 02:26:08.010597  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:08.027131  333054 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313784: state=Stopped err=<nil>
	W0110 02:26:08.027155  333054 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:26:07.341710  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:09.841691  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:06.854382  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:09.353495  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:07.863467  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:10.363618  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	I0110 02:26:08.028712  333054 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-313784" ...
	I0110 02:26:08.028768  333054 cli_runner.go:164] Run: docker start default-k8s-diff-port-313784
	I0110 02:26:08.282152  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:08.300901  333054 kic.go:430] container "default-k8s-diff-port-313784" state is running.
	I0110 02:26:08.301231  333054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:26:08.320668  333054 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/config.json ...
	I0110 02:26:08.320867  333054 machine.go:94] provisionDockerMachine start ...
	I0110 02:26:08.320939  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:08.339117  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:08.339402  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:08.339424  333054 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:26:08.340210  333054 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33762->127.0.0.1:33125: read: connection reset by peer
	I0110 02:26:11.469516  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:26:11.469542  333054 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-313784"
	I0110 02:26:11.469598  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:11.487103  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:11.487320  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:11.487334  333054 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313784 && echo "default-k8s-diff-port-313784" | sudo tee /etc/hostname
	I0110 02:26:11.623549  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313784
	
	I0110 02:26:11.623651  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:11.642386  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:11.642658  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:11.642685  333054 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:26:11.768142  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:26:11.768166  333054 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:26:11.768200  333054 ubuntu.go:190] setting up certificates
	I0110 02:26:11.768222  333054 provision.go:84] configureAuth start
	I0110 02:26:11.768283  333054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:26:11.785869  333054 provision.go:143] copyHostCerts
	I0110 02:26:11.785947  333054 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:26:11.785966  333054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:26:11.786052  333054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:26:11.786193  333054 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:26:11.786206  333054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:26:11.786244  333054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:26:11.786340  333054 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:26:11.786366  333054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:26:11.786407  333054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:26:11.786497  333054 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313784 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-313784 localhost minikube]
	I0110 02:26:11.887148  333054 provision.go:177] copyRemoteCerts
	I0110 02:26:11.887207  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:26:11.887242  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:11.905478  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:11.999168  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:26:12.016727  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 02:26:12.033427  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:26:12.050142  333054 provision.go:87] duration metric: took 281.897325ms to configureAuth
	I0110 02:26:12.050172  333054 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:26:12.050373  333054 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:12.050516  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.068396  333054 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:12.068611  333054 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I0110 02:26:12.068628  333054 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:26:12.375106  333054 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:26:12.375134  333054 machine.go:97] duration metric: took 4.054253767s to provisionDockerMachine
	I0110 02:26:12.375150  333054 start.go:293] postStartSetup for "default-k8s-diff-port-313784" (driver="docker")
	I0110 02:26:12.375165  333054 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:26:12.375227  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:26:12.375277  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.395289  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.488306  333054 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:26:12.491639  333054 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:26:12.491661  333054 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:26:12.491670  333054 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:26:12.491718  333054 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:26:12.491784  333054 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:26:12.491863  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:26:12.499277  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:12.516144  333054 start.go:296] duration metric: took 140.981819ms for postStartSetup
	I0110 02:26:12.516201  333054 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:26:12.516256  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.533744  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.622553  333054 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:26:12.627052  333054 fix.go:56] duration metric: took 4.616648935s for fixHost
	I0110 02:26:12.627073  333054 start.go:83] releasing machines lock for "default-k8s-diff-port-313784", held for 4.616695447s
	I0110 02:26:12.627125  333054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-313784
	I0110 02:26:12.644754  333054 ssh_runner.go:195] Run: cat /version.json
	I0110 02:26:12.644804  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.644858  333054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:26:12.644938  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:12.662804  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.663791  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:12.813601  333054 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:12.819979  333054 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:26:12.855500  333054 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:26:12.860375  333054 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:26:12.860433  333054 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:26:12.868544  333054 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:26:12.868568  333054 start.go:496] detecting cgroup driver to use...
	I0110 02:26:12.868593  333054 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:26:12.868631  333054 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:26:12.882972  333054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:26:12.894798  333054 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:26:12.894842  333054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:26:12.908207  333054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:26:12.919519  333054 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:26:12.999754  333054 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:26:13.080569  333054 docker.go:234] disabling docker service ...
	I0110 02:26:13.080624  333054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:26:13.094480  333054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:26:13.106467  333054 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:26:13.188001  333054 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:26:13.271255  333054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:26:13.283121  333054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:26:13.297206  333054 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:26:13.297269  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.305573  333054 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:26:13.305622  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.313895  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.322382  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.330475  333054 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:26:13.338061  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.346980  333054 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.355263  333054 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:13.364008  333054 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:26:13.370833  333054 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:26:13.377742  333054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:13.455369  333054 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:26:13.589378  333054 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:26:13.589435  333054 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:26:13.593369  333054 start.go:574] Will wait 60s for crictl version
	I0110 02:26:13.593438  333054 ssh_runner.go:195] Run: which crictl
	I0110 02:26:13.596768  333054 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:26:13.622538  333054 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:26:13.622620  333054 ssh_runner.go:195] Run: crio --version
	I0110 02:26:13.648450  333054 ssh_runner.go:195] Run: crio --version
	I0110 02:26:13.676132  333054 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:26:13.677171  333054 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-313784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:13.694435  333054 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 02:26:13.698314  333054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:13.708123  333054 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:26:13.708230  333054 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:13.708285  333054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:13.742117  333054 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:13.742140  333054 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:26:13.742191  333054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:13.766290  333054 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:13.766313  333054 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:26:13.766321  333054 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.35.0 crio true true} ...
	I0110 02:26:13.766406  333054 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-313784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
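	The kubelet flags rendered above are written into the 10-kubeadm.conf drop-in that is copied to /etc/systemd/system/kubelet.service.d/ a few lines below. A minimal sketch of how to inspect the effective unit by hand, assuming the profile is reachable with minikube ssh:

	    minikube -p default-k8s-diff-port-313784 ssh
	    # print the merged unit, including the generated drop-in
	    systemctl cat kubelet
	    # confirm the running service picked up --node-ip=192.168.94.2
	    systemctl status kubelet --no-pager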
	I0110 02:26:13.766469  333054 ssh_runner.go:195] Run: crio config
	I0110 02:26:13.809345  333054 cni.go:84] Creating CNI manager for ""
	I0110 02:26:13.809369  333054 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:13.809384  333054 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:26:13.809407  333054 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313784 NodeName:default-k8s-diff-port-313784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:26:13.809519  333054 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:26:13.809576  333054 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:26:13.817571  333054 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:26:13.817640  333054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:26:13.824947  333054 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:26:13.836838  333054 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:26:13.848954  333054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
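	The kubeadm config printed above is what this scp step writes to /var/tmp/minikube/kubeadm.yaml.new. A sketch of how to inspect it by hand, assuming kubeadm is staged next to kubelet/kubectl under /var/lib/minikube/binaries/v1.35.0 and that this kubeadm release supports the "config validate" subcommand:

	    minikube -p default-k8s-diff-port-313784 ssh
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # parse the file without applying anything to the cluster
	    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new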
	I0110 02:26:13.862017  333054 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:26:13.865658  333054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:13.875387  333054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:13.956102  333054 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:13.981062  333054 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784 for IP: 192.168.94.2
	I0110 02:26:13.981083  333054 certs.go:195] generating shared ca certs ...
	I0110 02:26:13.981099  333054 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:13.981247  333054 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:26:13.981287  333054 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:26:13.981296  333054 certs.go:257] generating profile certs ...
	I0110 02:26:13.981392  333054 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/client.key
	I0110 02:26:13.981458  333054 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key.9158e13d
	I0110 02:26:13.981494  333054 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key
	I0110 02:26:13.981593  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:26:13.981630  333054 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:26:13.981641  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:26:13.981666  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:26:13.981691  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:26:13.981715  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:26:13.981754  333054 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:13.982380  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:26:14.002823  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:26:14.022178  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:26:14.041321  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:26:14.062771  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:26:14.083376  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:26:14.101344  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:26:14.118232  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/default-k8s-diff-port-313784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:26:14.134901  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:26:14.151403  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:26:14.167997  333054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:26:14.185731  333054 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:26:14.197712  333054 ssh_runner.go:195] Run: openssl version
	I0110 02:26:14.203568  333054 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.210454  333054 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:26:14.217293  333054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.220958  333054 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.221008  333054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:26:14.255604  333054 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:14.262599  333054 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.269458  333054 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:26:14.276589  333054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.279973  333054 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.280016  333054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:14.314627  333054 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:26:14.322662  333054 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.329919  333054 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:26:14.337288  333054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.341459  333054 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.341522  333054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:26:14.381801  333054 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:26:14.389574  333054 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:26:14.393174  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:26:14.428232  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:26:14.462933  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:26:14.504201  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:26:14.553987  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:26:14.603619  333054 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
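	The -checkend 86400 runs above exit non-zero if a certificate expires within the next 24 hours (86400 seconds). To repeat one of these checks manually on the node:

	    # show the expiry date, then assert at least 24h of validity remain
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still valid for 24h
	    # the /etc/ssl/certs/<hash>.0 symlinks created earlier point at this subject-hash value
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem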
	I0110 02:26:14.652310  333054 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-313784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-313784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:14.652403  333054 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:26:14.652478  333054 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:26:14.683632  333054 cri.go:96] found id: "35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87"
	I0110 02:26:14.683657  333054 cri.go:96] found id: "fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f"
	I0110 02:26:14.683663  333054 cri.go:96] found id: "b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371"
	I0110 02:26:14.683672  333054 cri.go:96] found id: "6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d"
	I0110 02:26:14.683677  333054 cri.go:96] found id: ""
	I0110 02:26:14.683722  333054 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:26:14.696074  333054 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:14Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:26:14.696137  333054 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:26:14.704953  333054 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:26:14.704972  333054 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:26:14.705015  333054 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:26:14.712419  333054 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:26:14.713669  333054 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-313784" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:14.714578  333054 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-10552/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-313784" cluster setting kubeconfig missing "default-k8s-diff-port-313784" context setting]
	I0110 02:26:14.715863  333054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:14.718193  333054 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:26:14.726039  333054 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0110 02:26:14.726065  333054 kubeadm.go:602] duration metric: took 21.086368ms to restartPrimaryControlPlane
	I0110 02:26:14.726075  333054 kubeadm.go:403] duration metric: took 73.773963ms to StartCluster
	I0110 02:26:14.726090  333054 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:14.726146  333054 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:14.728022  333054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:14.728258  333054 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:14.728335  333054 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:26:14.728441  333054 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-313784"
	I0110 02:26:14.728461  333054 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-313784"
	W0110 02:26:14.728472  333054 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:26:14.728500  333054 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:26:14.728515  333054 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:14.728508  333054 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-313784"
	I0110 02:26:14.728521  333054 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-313784"
	I0110 02:26:14.728542  333054 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-313784"
	W0110 02:26:14.728552  333054 addons.go:248] addon dashboard should already be in state true
	I0110 02:26:14.728555  333054 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313784"
	I0110 02:26:14.728592  333054 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:26:14.728874  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.728984  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.729045  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.732296  333054 out.go:179] * Verifying Kubernetes components...
	I0110 02:26:14.733473  333054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:14.754266  333054 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-313784"
	W0110 02:26:14.754286  333054 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:26:14.754310  333054 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:26:14.754696  333054 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:26:14.754760  333054 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:26:14.754821  333054 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:26:14.756144  333054 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:26:14.756164  333054 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:26:14.756219  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:14.756240  333054 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W0110 02:26:12.342036  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:14.342305  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	W0110 02:26:11.354082  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:13.853969  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:12.862878  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:14.863968  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	I0110 02:26:14.757517  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:26:14.757537  333054 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:26:14.757593  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:14.784678  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:14.789235  333054 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:26:14.789258  333054 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:26:14.789315  333054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:26:14.799372  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:14.817340  333054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:26:14.887998  333054 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:14.901669  333054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:26:14.901822  333054 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313784" to be "Ready" ...
	I0110 02:26:14.912232  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:26:14.912252  333054 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:26:14.926146  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:26:14.926179  333054 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:26:14.930180  333054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:26:14.940513  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:26:14.940536  333054 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:26:14.955099  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:26:14.955220  333054 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:26:14.969871  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:26:14.969913  333054 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:26:14.984030  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:26:14.984048  333054 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:26:14.997651  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:26:14.997731  333054 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:26:15.009841  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:26:15.009865  333054 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:26:15.021979  333054 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:26:15.021997  333054 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:26:15.033816  333054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
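	Once the apply above completes, the dashboard objects land in the kubernetes-dashboard namespace. A quick manual check from the host; the namespace and deployment names here are taken from the stock dashboard manifests, not from this log, so treat them as assumptions:

	    kubectl --context default-k8s-diff-port-313784 -n kubernetes-dashboard get deploy,pods
	    # wait for the main deployment to become available (name assumed from the upstream manifest)
	    kubectl --context default-k8s-diff-port-313784 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=120s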
	I0110 02:26:16.424130  333054 node_ready.go:49] node "default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:16.424164  333054 node_ready.go:38] duration metric: took 1.522303458s for node "default-k8s-diff-port-313784" to be "Ready" ...
	I0110 02:26:16.424180  333054 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:26:16.424229  333054 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:26:16.940709  333054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039005236s)
	I0110 02:26:16.940756  333054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.010552905s)
	I0110 02:26:16.940876  333054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.907021204s)
	I0110 02:26:16.940923  333054 api_server.go:72] duration metric: took 2.212633663s to wait for apiserver process to appear ...
	I0110 02:26:16.940937  333054 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:26:16.940973  333054 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0110 02:26:16.944018  333054 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-313784 addons enable metrics-server
	
	I0110 02:26:16.945387  333054 api_server.go:325] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:26:16.945409  333054 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
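	A 500 from /healthz with only the rbac/bootstrap-roles and priority-class post-start hooks failing is the usual transient state right after an apiserver restart. The same endpoint can be probed by hand, assuming anonymous access to /healthz is still allowed by the default system:public-info-viewer binding (-k skips TLS verification for a quick check):

	    curl -k https://192.168.94.2:8444/healthz
	    # per-check breakdown, matching the [+]/[-] lines in this log
	    curl -k "https://192.168.94.2:8444/healthz?verbose"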
	I0110 02:26:16.947599  333054 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:26:16.948696  333054 addons.go:530] duration metric: took 2.220379713s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:26:17.441512  333054 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0110 02:26:17.447158  333054 api_server.go:325] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:26:17.447192  333054 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:26:16.342464  324231 pod_ready.go:104] pod "coredns-5dd5756b68-vc68c" is not "Ready", error: <nil>
	I0110 02:26:16.843631  324231 pod_ready.go:94] pod "coredns-5dd5756b68-vc68c" is "Ready"
	I0110 02:26:16.843664  324231 pod_ready.go:86] duration metric: took 36.0074275s for pod "coredns-5dd5756b68-vc68c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.846562  324231 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.852708  324231 pod_ready.go:94] pod "etcd-old-k8s-version-188604" is "Ready"
	I0110 02:26:16.852729  324231 pod_ready.go:86] duration metric: took 6.144531ms for pod "etcd-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.855561  324231 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.859755  324231 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-188604" is "Ready"
	I0110 02:26:16.859776  324231 pod_ready.go:86] duration metric: took 4.190621ms for pod "kube-apiserver-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:16.862770  324231 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.039720  324231 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-188604" is "Ready"
	I0110 02:26:17.039743  324231 pod_ready.go:86] duration metric: took 176.948921ms for pod "kube-controller-manager-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.240978  324231 pod_ready.go:83] waiting for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.640874  324231 pod_ready.go:94] pod "kube-proxy-c445q" is "Ready"
	I0110 02:26:17.640917  324231 pod_ready.go:86] duration metric: took 399.91418ms for pod "kube-proxy-c445q" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:17.840732  324231 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:18.240702  324231 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-188604" is "Ready"
	I0110 02:26:18.240726  324231 pod_ready.go:86] duration metric: took 399.96876ms for pod "kube-scheduler-old-k8s-version-188604" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:18.240738  324231 pod_ready.go:40] duration metric: took 37.40938402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:18.285679  324231 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I0110 02:26:18.287038  324231 out.go:203] 
	W0110 02:26:18.288125  324231 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:26:18.289214  324231 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:26:18.290304  324231 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-188604" cluster and "default" namespace by default
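	The skew warning above means the host kubectl (1.35.0) is seven minor versions ahead of this 1.28.0 cluster. The bundled, version-matched client that the hint refers to can be invoked per profile:

	    # run the kubectl binary minikube downloads for the cluster's own version
	    minikube -p old-k8s-version-188604 kubectl -- get pods -A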
	W0110 02:26:15.854580  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:18.355311  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:20.355617  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:16.864419  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:19.362850  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	W0110 02:26:21.363778  325613 pod_ready.go:104] pod "coredns-7d764666f9-lfdgm" is not "Ready", error: <nil>
	I0110 02:26:17.941955  333054 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0110 02:26:17.946088  333054 api_server.go:325] https://192.168.94.2:8444/healthz returned 200:
	ok
	I0110 02:26:17.947035  333054 api_server.go:141] control plane version: v1.35.0
	I0110 02:26:17.947057  333054 api_server.go:131] duration metric: took 1.006110565s to wait for apiserver health ...
	I0110 02:26:17.947069  333054 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:26:17.950502  333054 system_pods.go:59] 8 kube-system pods found
	I0110 02:26:17.950533  333054 system_pods.go:61] "coredns-7d764666f9-rhgg5" [7b2c9aeb-37f2-4c60-ac35-a17f643dba15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:26:17.950544  333054 system_pods.go:61] "etcd-default-k8s-diff-port-313784" [b49d0042-7385-49c7-ba65-5a452ae99050] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:26:17.950551  333054 system_pods.go:61] "kindnet-wbscw" [4ad21b3c-b663-4ee1-b481-19655b22e160] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:26:17.950561  333054 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313784" [f59fcb0e-e243-46f2-aa8e-beda31fa8454] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:26:17.950574  333054 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313784" [fb4fc971-af17-4755-ad89-b9926ae3f9fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:26:17.950599  333054 system_pods.go:61] "kube-proxy-6dcdf" [e2cb4683-cef0-4b78-9044-a209d81b5ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:26:17.950608  333054 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313784" [beee3e76-d418-4002-974d-39fd6cd498e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:26:17.950627  333054 system_pods.go:61] "storage-provisioner" [5576c2bc-6ca6-49ef-98e4-27f810e200c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:26:17.950637  333054 system_pods.go:74] duration metric: took 3.563113ms to wait for pod list to return data ...
	I0110 02:26:17.950646  333054 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:26:17.952901  333054 default_sa.go:45] found service account: "default"
	I0110 02:26:17.952920  333054 default_sa.go:55] duration metric: took 2.266196ms for default service account to be created ...
	I0110 02:26:17.952928  333054 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:26:17.955237  333054 system_pods.go:86] 8 kube-system pods found
	I0110 02:26:17.955264  333054 system_pods.go:89] "coredns-7d764666f9-rhgg5" [7b2c9aeb-37f2-4c60-ac35-a17f643dba15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:26:17.955275  333054 system_pods.go:89] "etcd-default-k8s-diff-port-313784" [b49d0042-7385-49c7-ba65-5a452ae99050] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:26:17.955284  333054 system_pods.go:89] "kindnet-wbscw" [4ad21b3c-b663-4ee1-b481-19655b22e160] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:26:17.955297  333054 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313784" [f59fcb0e-e243-46f2-aa8e-beda31fa8454] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:26:17.955307  333054 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313784" [fb4fc971-af17-4755-ad89-b9926ae3f9fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:26:17.955319  333054 system_pods.go:89] "kube-proxy-6dcdf" [e2cb4683-cef0-4b78-9044-a209d81b5ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:26:17.955360  333054 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313784" [beee3e76-d418-4002-974d-39fd6cd498e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:26:17.955377  333054 system_pods.go:89] "storage-provisioner" [5576c2bc-6ca6-49ef-98e4-27f810e200c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:26:17.955389  333054 system_pods.go:126] duration metric: took 2.454636ms to wait for k8s-apps to be running ...
	I0110 02:26:17.955400  333054 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:26:17.955456  333054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:17.968328  333054 system_svc.go:56] duration metric: took 12.920893ms WaitForService to wait for kubelet
	I0110 02:26:17.968357  333054 kubeadm.go:587] duration metric: took 3.240069063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:26:17.968378  333054 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:26:17.971097  333054 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:26:17.971123  333054 node_conditions.go:123] node cpu capacity is 8
	I0110 02:26:17.971140  333054 node_conditions.go:105] duration metric: took 2.756049ms to run NodePressure ...
	I0110 02:26:17.971152  333054 start.go:242] waiting for startup goroutines ...
	I0110 02:26:17.971166  333054 start.go:247] waiting for cluster config update ...
	I0110 02:26:17.971185  333054 start.go:256] writing updated cluster config ...
	I0110 02:26:17.971431  333054 ssh_runner.go:195] Run: rm -f paused
	I0110 02:26:17.975125  333054 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:17.978548  333054 pod_ready.go:83] waiting for pod "coredns-7d764666f9-rhgg5" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:26:19.983349  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:21.985141  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:23.364403  325613 pod_ready.go:94] pod "coredns-7d764666f9-lfdgm" is "Ready"
	I0110 02:26:23.364435  325613 pod_ready.go:86] duration metric: took 36.506559973s for pod "coredns-7d764666f9-lfdgm" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.367345  325613 pod_ready.go:83] waiting for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.371731  325613 pod_ready.go:94] pod "etcd-embed-certs-872415" is "Ready"
	I0110 02:26:23.371763  325613 pod_ready.go:86] duration metric: took 4.396045ms for pod "etcd-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.374001  325613 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.378543  325613 pod_ready.go:94] pod "kube-apiserver-embed-certs-872415" is "Ready"
	I0110 02:26:23.378563  325613 pod_ready.go:86] duration metric: took 4.542133ms for pod "kube-apiserver-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.380655  325613 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.562055  325613 pod_ready.go:94] pod "kube-controller-manager-embed-certs-872415" is "Ready"
	I0110 02:26:23.562084  325613 pod_ready.go:86] duration metric: took 181.404493ms for pod "kube-controller-manager-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:23.762713  325613 pod_ready.go:83] waiting for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.162842  325613 pod_ready.go:94] pod "kube-proxy-47n8d" is "Ready"
	I0110 02:26:24.162873  325613 pod_ready.go:86] duration metric: took 400.132834ms for pod "kube-proxy-47n8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.364210  325613 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.762428  325613 pod_ready.go:94] pod "kube-scheduler-embed-certs-872415" is "Ready"
	I0110 02:26:24.762456  325613 pod_ready.go:86] duration metric: took 398.220633ms for pod "kube-scheduler-embed-certs-872415" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:24.762473  325613 pod_ready.go:40] duration metric: took 37.908107093s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:24.818061  325613 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:24.819835  325613 out.go:179] * Done! kubectl is now configured to use "embed-certs-872415" cluster and "default" namespace by default
	W0110 02:26:22.854633  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:24.855141  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	W0110 02:26:23.985202  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:26.483457  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:27.354070  327170 pod_ready.go:104] pod "coredns-7d764666f9-xrkw6" is not "Ready", error: <nil>
	I0110 02:26:28.853518  327170 pod_ready.go:94] pod "coredns-7d764666f9-xrkw6" is "Ready"
	I0110 02:26:28.853541  327170 pod_ready.go:86] duration metric: took 38.00485446s for pod "coredns-7d764666f9-xrkw6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.855776  327170 pod_ready.go:83] waiting for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.859059  327170 pod_ready.go:94] pod "etcd-no-preload-190877" is "Ready"
	I0110 02:26:28.859077  327170 pod_ready.go:86] duration metric: took 3.283782ms for pod "etcd-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.860769  327170 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.863911  327170 pod_ready.go:94] pod "kube-apiserver-no-preload-190877" is "Ready"
	I0110 02:26:28.863928  327170 pod_ready.go:86] duration metric: took 3.138392ms for pod "kube-apiserver-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:28.865531  327170 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.051921  327170 pod_ready.go:94] pod "kube-controller-manager-no-preload-190877" is "Ready"
	I0110 02:26:29.051952  327170 pod_ready.go:86] duration metric: took 186.403273ms for pod "kube-controller-manager-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.251770  327170 pod_ready.go:83] waiting for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.651751  327170 pod_ready.go:94] pod "kube-proxy-hrztb" is "Ready"
	I0110 02:26:29.651782  327170 pod_ready.go:86] duration metric: took 399.975949ms for pod "kube-proxy-hrztb" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:29.852180  327170 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:30.252397  327170 pod_ready.go:94] pod "kube-scheduler-no-preload-190877" is "Ready"
	I0110 02:26:30.252424  327170 pod_ready.go:86] duration metric: took 400.217373ms for pod "kube-scheduler-no-preload-190877" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:30.252447  327170 pod_ready.go:40] duration metric: took 39.406842868s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:30.296441  327170 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:30.299390  327170 out.go:179] * Done! kubectl is now configured to use "no-preload-190877" cluster and "default" namespace by default
	W0110 02:26:28.983753  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:31.484422  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
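	The readiness wait logged above polls kube-system pods carrying one of the listed labels (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A roughly equivalent manual check, assuming kubectl is pointed at the same cluster context that minikube configured, would be:
	    # wait for the CoreDNS pods, then repeat for the other labels listed above
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=120s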
	
	
	==> CRI-O <==
	Jan 10 02:25:59 old-k8s-version-188604 crio[574]: time="2026-01-10T02:25:59.90439991Z" level=info msg="Started container" PID=1779 containerID=894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper id=825a4d68-bd94-47be-8e2d-19ff3a7e36c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7061713ec27812cd2595f4d9823c8353c5ce05a2983b0531073cfa81e35681c2
	Jan 10 02:26:00 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:00.84700936Z" level=info msg="Removing container: ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4" id=db10a1c9-a677-4d4f-b94f-c7f334e1f9e8 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:00 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:00.856262483Z" level=info msg="Removed container ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=db10a1c9-a677-4d4f-b94f-c7f334e1f9e8 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.871098853Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fddad4c2-b813-4d91-ab23-09d450c38bef name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.872012716Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c4c09957-dd63-483a-9d7d-0e646cd91874 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.873004145Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c4236c8b-9d46-479c-9829-cf731b8a41ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.873147755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.877665334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.877841316Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/92c3be459fff5f82fbfefc81122934eb344eb8d67c42913062baa4e802055b81/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.877868256Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/92c3be459fff5f82fbfefc81122934eb344eb8d67c42913062baa4e802055b81/merged/etc/group: no such file or directory"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.878115443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.904377286Z" level=info msg="Created container 3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e: kube-system/storage-provisioner/storage-provisioner" id=c4236c8b-9d46-479c-9829-cf731b8a41ed name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.904870998Z" level=info msg="Starting container: 3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e" id=e84ba7d9-2425-46d7-a622-93ec781efc69 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:10 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:10.906553502Z" level=info msg="Started container" PID=1793 containerID=3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e description=kube-system/storage-provisioner/storage-provisioner id=e84ba7d9-2425-46d7-a622-93ec781efc69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d04ac23222b33ba2c42fee6a0a3e7b100eaadd4b852928775d3f51d0e27e16d8
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.761159615Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=39d641d6-07ec-4824-baaa-c3bb699fde8f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.762165892Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=925388ab-aa85-4a9c-b1df-d138c0ce9d5e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.763254942Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=f3855ee6-6da8-4ae2-8837-fc74ef3bdacf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.763423742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.77009621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.770877812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.81180073Z" level=info msg="Created container 7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=f3855ee6-6da8-4ae2-8837-fc74ef3bdacf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.812496925Z" level=info msg="Starting container: 7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50" id=b62937f1-cf05-4e10-a86e-2e4b4be440ae name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.814517026Z" level=info msg="Started container" PID=1812 containerID=7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper id=b62937f1-cf05-4e10-a86e-2e4b4be440ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=7061713ec27812cd2595f4d9823c8353c5ce05a2983b0531073cfa81e35681c2
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.888409348Z" level=info msg="Removing container: 894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870" id=f7fc5fa6-54aa-40e1-815f-041c10770e86 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:16 old-k8s-version-188604 crio[574]: time="2026-01-10T02:26:16.897544924Z" level=info msg="Removed container 894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79/dashboard-metrics-scraper" id=f7fc5fa6-54aa-40e1-815f-041c10770e86 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	7e64336c4e44a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   7061713ec2781       dashboard-metrics-scraper-5f989dc9cf-qgv79       kubernetes-dashboard
	3e46f34495823       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   d04ac23222b33       storage-provisioner                              kube-system
	63fa889860ac4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   febd3354b9f8c       kubernetes-dashboard-8694d4445c-lq5lf            kubernetes-dashboard
	3eef71caacb42       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   e2da0b126a67a       coredns-5dd5756b68-vc68c                         kube-system
	d4fa616c8db72       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   7ceaf155b9d71       busybox                                          default
	832e7427e079d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   79f349779dbd1       kindnet-25dkr                                    kube-system
	2994793acd647       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   d04ac23222b33       storage-provisioner                              kube-system
	acf5b5647b7a2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   303924485e629       kube-proxy-c445q                                 kube-system
	861ce74c9faf0       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   c57bcead224d0       kube-scheduler-old-k8s-version-188604            kube-system
	c7891e84c7b07       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   d830b2ea5a725       kube-controller-manager-old-k8s-version-188604   kube-system
	a022cc94e780e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   ac3902f957dcf       kube-apiserver-old-k8s-version-188604            kube-system
	583aa40ef23fe       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   4d952430c221f       etcd-old-k8s-version-188604                      kube-system
	
	
	==> coredns [3eef71caacb4290f3264c8c7c1487a2a3a32057cebca2adde7fb5c9b5446e232] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43656 - 16341 "HINFO IN 3070078427942198497.177982554793797762. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.078035392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-188604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-188604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=old-k8s-version-188604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-188604
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:10 +0000   Sat, 10 Jan 2026 02:25:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-188604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                8835f89d-8806-4482-b07d-960e07e8dff0
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-vc68c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-188604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-25dkr                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-188604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-188604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-c445q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-188604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-qgv79        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lq5lf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-188604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-188604 event: Registered Node old-k8s-version-188604 in Controller
	  Normal  NodeReady                93s                kubelet          Node old-k8s-version-188604 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node old-k8s-version-188604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node old-k8s-version-188604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-188604 event: Registered Node old-k8s-version-188604 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [583aa40ef23feaf98f416116520b822f7fed26e3509ae9a5afe569be8de6ceff] <==
	{"level":"info","ts":"2026-01-10T02:25:37.370694Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:25:37.370728Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:25:37.370987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-10T02:25:37.371114Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2026-01-10T02:25:37.371273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:25:37.371443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:25:37.376736Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:25:37.377022Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:25:37.377092Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:25:37.377202Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:37.377239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:38.65739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:38.657449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:38.657489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:38.657503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.657508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.657516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.657523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:38.658811Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:38.65883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:38.658814Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-188604 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:38.659049Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:38.659075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:38.660141Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:25:38.660211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 02:26:34 up  1:09,  0 user,  load average: 3.50, 3.51, 2.38
	Linux old-k8s-version-188604 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [832e7427e079dec6ed5e1274fdf2e96dc09e2cf11be39eb0ad4d7eb590ba7cb0] <==
	I0110 02:25:40.427646       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:40.427851       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:25:40.428028       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:40.428053       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:40.428069       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:40.725178       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:40.725212       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:40.725226       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:40.727069       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:41.025467       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:41.025558       1 metrics.go:72] Registering metrics
	I0110 02:25:41.025707       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:50.725994       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:25:50.726046       1 main.go:301] handling current node
	I0110 02:26:00.725956       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:00.725984       1 main.go:301] handling current node
	I0110 02:26:10.725014       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:10.725066       1 main.go:301] handling current node
	I0110 02:26:20.726614       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:20.726647       1 main.go:301] handling current node
	I0110 02:26:30.731697       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:26:30.731727       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a022cc94e780e8ed928e70a6eda0944c970eecd9d4d3e3af71a2fd593d685500] <==
	I0110 02:25:39.564074       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0110 02:25:39.664035       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 02:25:39.664090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 02:25:39.664100       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 02:25:39.664119       1 aggregator.go:166] initial CRD sync complete...
	I0110 02:25:39.664126       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 02:25:39.664132       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:25:39.664137       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:25:39.664269       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:25:39.664275       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 02:25:39.664295       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 02:25:39.664517       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 02:25:39.691242       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 02:25:39.699735       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:40.568361       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 02:25:40.644614       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 02:25:40.679714       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 02:25:40.698645       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:40.708862       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:40.720387       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 02:25:40.762165       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.128.110"}
	I0110 02:25:40.777911       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.221.81"}
	I0110 02:25:52.345792       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:25:52.350365       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 02:25:52.513373       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c7891e84c7b07fa814892b06d908aeaae4f1e237406a3b4e0c937ca6047439f5] <==
	I0110 02:25:52.524263       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-lq5lf"
	I0110 02:25:52.524330       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 02:25:52.526134       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-qgv79"
	I0110 02:25:52.528793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.205692ms"
	I0110 02:25:52.533438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.370765ms"
	I0110 02:25:52.537839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.004994ms"
	I0110 02:25:52.550057       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 02:25:52.550239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.754174ms"
	I0110 02:25:52.550316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.836µs"
	I0110 02:25:52.550392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.737µs"
	I0110 02:25:52.551164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.273949ms"
	I0110 02:25:52.551338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.832µs"
	I0110 02:25:52.558070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.404µs"
	I0110 02:25:52.869461       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:25:52.946352       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:25:52.946397       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 02:25:57.858163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.238279ms"
	I0110 02:25:57.858269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.575µs"
	I0110 02:25:59.866452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="280.113µs"
	I0110 02:26:00.856500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.116µs"
	I0110 02:26:01.893937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.32µs"
	I0110 02:26:16.492487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.081821ms"
	I0110 02:26:16.492703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.395µs"
	I0110 02:26:16.899247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.362µs"
	I0110 02:26:22.849416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.262µs"
	
	
	==> kube-proxy [acf5b5647b7a260868d1a73059bea70c514cbda74b322acd6ecef0169e38684f] <==
	I0110 02:25:40.241685       1 server_others.go:69] "Using iptables proxy"
	I0110 02:25:40.259042       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0110 02:25:40.286904       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:40.291286       1 server_others.go:152] "Using iptables Proxier"
	I0110 02:25:40.291322       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 02:25:40.291332       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 02:25:40.291363       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 02:25:40.291640       1 server.go:846] "Version info" version="v1.28.0"
	I0110 02:25:40.291662       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:40.292237       1 config.go:97] "Starting endpoint slice config controller"
	I0110 02:25:40.292272       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 02:25:40.292355       1 config.go:188] "Starting service config controller"
	I0110 02:25:40.292360       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 02:25:40.292392       1 config.go:315] "Starting node config controller"
	I0110 02:25:40.292398       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 02:25:40.392416       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 02:25:40.392640       1 shared_informer.go:318] Caches are synced for service config
	I0110 02:25:40.392655       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [861ce74c9faf076868c60d47909834154c9a2f93ac74567527702fb1423497f3] <==
	W0110 02:25:39.630411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0110 02:25:39.630450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0110 02:25:39.630551       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	E0110 02:25:39.630579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	W0110 02:25:39.630704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.630899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.630946       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0110 02:25:39.630968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0110 02:25:39.631596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.631629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.631805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.631828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.632054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0110 02:25:39.632082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.632081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0110 02:25:39.632099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.632370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.634221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0110 02:25:39.634037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0110 02:25:39.634284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0110 02:25:39.634094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0110 02:25:39.634301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0110 02:25:39.634159       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0110 02:25:39.634317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I0110 02:25:40.618813       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.534834     734 topology_manager.go:215] "Topology Admit Handler" podUID="092a7a28-4eb5-4624-b51a-a672142e3519" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-qgv79"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677329     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/44e589a7-3475-4e98-95fc-c5f990e17892-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lq5lf\" (UID: \"44e589a7-3475-4e98-95fc-c5f990e17892\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lq5lf"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677438     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5b5\" (UniqueName: \"kubernetes.io/projected/092a7a28-4eb5-4624-b51a-a672142e3519-kube-api-access-dd5b5\") pod \"dashboard-metrics-scraper-5f989dc9cf-qgv79\" (UID: \"092a7a28-4eb5-4624-b51a-a672142e3519\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677494     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blts5\" (UniqueName: \"kubernetes.io/projected/44e589a7-3475-4e98-95fc-c5f990e17892-kube-api-access-blts5\") pod \"kubernetes-dashboard-8694d4445c-lq5lf\" (UID: \"44e589a7-3475-4e98-95fc-c5f990e17892\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lq5lf"
	Jan 10 02:25:52 old-k8s-version-188604 kubelet[734]: I0110 02:25:52.677557     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/092a7a28-4eb5-4624-b51a-a672142e3519-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-qgv79\" (UID: \"092a7a28-4eb5-4624-b51a-a672142e3519\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79"
	Jan 10 02:25:59 old-k8s-version-188604 kubelet[734]: I0110 02:25:59.842076     734 scope.go:117] "RemoveContainer" containerID="ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4"
	Jan 10 02:25:59 old-k8s-version-188604 kubelet[734]: I0110 02:25:59.864031     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lq5lf" podStartSLOduration=3.732206032 podCreationTimestamp="2026-01-10 02:25:52 +0000 UTC" firstStartedPulling="2026-01-10 02:25:52.866499857 +0000 UTC m=+16.211790880" lastFinishedPulling="2026-01-10 02:25:56.99824478 +0000 UTC m=+20.343535795" observedRunningTime="2026-01-10 02:25:57.851388128 +0000 UTC m=+21.196679173" watchObservedRunningTime="2026-01-10 02:25:59.863950947 +0000 UTC m=+23.209242041"
	Jan 10 02:26:00 old-k8s-version-188604 kubelet[734]: I0110 02:26:00.845624     734 scope.go:117] "RemoveContainer" containerID="ec6e54552dbb815ecf2c92ebe2982198f86e2bafa507070133556fc036dadff4"
	Jan 10 02:26:00 old-k8s-version-188604 kubelet[734]: I0110 02:26:00.845807     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:00 old-k8s-version-188604 kubelet[734]: E0110 02:26:00.846185     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:01 old-k8s-version-188604 kubelet[734]: I0110 02:26:01.849071     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:01 old-k8s-version-188604 kubelet[734]: E0110 02:26:01.849406     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:02 old-k8s-version-188604 kubelet[734]: I0110 02:26:02.851711     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:02 old-k8s-version-188604 kubelet[734]: E0110 02:26:02.852027     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:10 old-k8s-version-188604 kubelet[734]: I0110 02:26:10.870586     734 scope.go:117] "RemoveContainer" containerID="2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: I0110 02:26:16.760472     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: I0110 02:26:16.887249     734 scope.go:117] "RemoveContainer" containerID="894003cc30d5930d7a33c0d06533c1a0fda660421953f5278f380ff05ac83870"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: I0110 02:26:16.887489     734 scope.go:117] "RemoveContainer" containerID="7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	Jan 10 02:26:16 old-k8s-version-188604 kubelet[734]: E0110 02:26:16.887827     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:22 old-k8s-version-188604 kubelet[734]: I0110 02:26:22.837236     734 scope.go:117] "RemoveContainer" containerID="7e64336c4e44a56573000552b8c2588893d2f4b52182e1bd4e6f3d925a2aed50"
	Jan 10 02:26:22 old-k8s-version-188604 kubelet[734]: E0110 02:26:22.837693     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qgv79_kubernetes-dashboard(092a7a28-4eb5-4624-b51a-a672142e3519)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qgv79" podUID="092a7a28-4eb5-4624-b51a-a672142e3519"
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:26:30 old-k8s-version-188604 systemd[1]: kubelet.service: Consumed 1.522s CPU time.
	
	
	==> kubernetes-dashboard [63fa889860ac4c551c147aa893d7114cdb4799c26166f95c40f2c08d1a1f8641] <==
	2026/01/10 02:25:57 Using namespace: kubernetes-dashboard
	2026/01/10 02:25:57 Using in-cluster config to connect to apiserver
	2026/01/10 02:25:57 Using secret token for csrf signing
	2026/01/10 02:25:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:25:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:25:57 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 02:25:57 Generating JWE encryption key
	2026/01/10 02:25:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:25:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:25:57 Initializing JWE encryption key from synchronized object
	2026/01/10 02:25:57 Creating in-cluster Sidecar client
	2026/01/10 02:25:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:25:57 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:25:57 Starting overwatch
	
	
	==> storage-provisioner [2994793acd647dbf48fd7155eab6331a96f311accd8a9212a55f571d61b00119] <==
	I0110 02:25:40.197785       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:10.203111       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3e46f344958234df9ef143c8a3163f0aedd31947f4d549c1531bc7e7536d9a1e] <==
	I0110 02:26:10.917925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:10.925595       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:10.925641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 02:26:28.322468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:26:28.322597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dd8f8c3-352c-4a42-bd82-a8d8489739cb", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-188604_fe0624cd-2670-4404-94e0-ce389174df7b became leader
	I0110 02:26:28.322641       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-188604_fe0624cd-2670-4404-94e0-ce389174df7b!
	I0110 02:26:28.422816       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-188604_fe0624cd-2670-4404-94e0-ce389174df7b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188604 -n old-k8s-version-188604
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-188604 -n old-k8s-version-188604: exit status 2 (320.979087ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-188604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-872415 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-872415 --alsologtostderr -v=1: exit status 80 (2.491947822s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-872415 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:26:36.582751  337858 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:36.583034  337858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:36.583044  337858 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:36.583047  337858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:36.583290  337858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:36.583593  337858 out.go:368] Setting JSON to false
	I0110 02:26:36.583612  337858 mustload.go:66] Loading cluster: embed-certs-872415
	I0110 02:26:36.584034  337858 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:36.584448  337858 cli_runner.go:164] Run: docker container inspect embed-certs-872415 --format={{.State.Status}}
	I0110 02:26:36.602472  337858 host.go:66] Checking if "embed-certs-872415" exists ...
	I0110 02:26:36.602762  337858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:36.661262  337858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 02:26:36.649875591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:36.661945  337858 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-872415 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:26:36.663629  337858 out.go:179] * Pausing node embed-certs-872415 ... 
	I0110 02:26:36.664676  337858 host.go:66] Checking if "embed-certs-872415" exists ...
	I0110 02:26:36.664938  337858 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:36.664976  337858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-872415
	I0110 02:26:36.684056  337858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/embed-certs-872415/id_rsa Username:docker}
	I0110 02:26:36.779825  337858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:36.793453  337858 pause.go:52] kubelet running: true
	I0110 02:26:36.793526  337858 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:36.966617  337858 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:36.966741  337858 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:37.036619  337858 cri.go:96] found id: "4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0"
	I0110 02:26:37.036637  337858 cri.go:96] found id: "11e07e750563655b9b9b68d1c1bd4f62c6c891c31f8f8b4f1b2aa5f35740f21c"
	I0110 02:26:37.036641  337858 cri.go:96] found id: "884c8e6bbbabba54b1caf0bf973c6a8e87573c415d5e41d4148ee26ce39d8a86"
	I0110 02:26:37.036644  337858 cri.go:96] found id: "c9ed6e5833a02b02188ebd5dc59c3905449d205507f883d7355a53640612992e"
	I0110 02:26:37.036648  337858 cri.go:96] found id: "f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb"
	I0110 02:26:37.036653  337858 cri.go:96] found id: "c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227"
	I0110 02:26:37.036656  337858 cri.go:96] found id: "a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e"
	I0110 02:26:37.036659  337858 cri.go:96] found id: "3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22"
	I0110 02:26:37.036661  337858 cri.go:96] found id: "d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766"
	I0110 02:26:37.036668  337858 cri.go:96] found id: "9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05"
	I0110 02:26:37.036671  337858 cri.go:96] found id: "735ed321afcb16724cf6cb26a8662a0bbc9c50672d51b165beb2d4fcc3243180"
	I0110 02:26:37.036674  337858 cri.go:96] found id: ""
	I0110 02:26:37.036715  337858 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:37.049784  337858 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:37Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:26:37.276054  337858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:37.309660  337858 pause.go:52] kubelet running: false
	I0110 02:26:37.309724  337858 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:37.480900  337858 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:37.480978  337858 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:37.551268  337858 cri.go:96] found id: "4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0"
	I0110 02:26:37.551290  337858 cri.go:96] found id: "11e07e750563655b9b9b68d1c1bd4f62c6c891c31f8f8b4f1b2aa5f35740f21c"
	I0110 02:26:37.551296  337858 cri.go:96] found id: "884c8e6bbbabba54b1caf0bf973c6a8e87573c415d5e41d4148ee26ce39d8a86"
	I0110 02:26:37.551300  337858 cri.go:96] found id: "c9ed6e5833a02b02188ebd5dc59c3905449d205507f883d7355a53640612992e"
	I0110 02:26:37.551304  337858 cri.go:96] found id: "f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb"
	I0110 02:26:37.551309  337858 cri.go:96] found id: "c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227"
	I0110 02:26:37.551314  337858 cri.go:96] found id: "a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e"
	I0110 02:26:37.551318  337858 cri.go:96] found id: "3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22"
	I0110 02:26:37.551322  337858 cri.go:96] found id: "d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766"
	I0110 02:26:37.551330  337858 cri.go:96] found id: "9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05"
	I0110 02:26:37.551335  337858 cri.go:96] found id: "735ed321afcb16724cf6cb26a8662a0bbc9c50672d51b165beb2d4fcc3243180"
	I0110 02:26:37.551339  337858 cri.go:96] found id: ""
	I0110 02:26:37.551398  337858 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:38.088046  337858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:38.101479  337858 pause.go:52] kubelet running: false
	I0110 02:26:38.101532  337858 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:38.260646  337858 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:38.260708  337858 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:38.334877  337858 cri.go:96] found id: "4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0"
	I0110 02:26:38.334910  337858 cri.go:96] found id: "11e07e750563655b9b9b68d1c1bd4f62c6c891c31f8f8b4f1b2aa5f35740f21c"
	I0110 02:26:38.334917  337858 cri.go:96] found id: "884c8e6bbbabba54b1caf0bf973c6a8e87573c415d5e41d4148ee26ce39d8a86"
	I0110 02:26:38.334922  337858 cri.go:96] found id: "c9ed6e5833a02b02188ebd5dc59c3905449d205507f883d7355a53640612992e"
	I0110 02:26:38.334926  337858 cri.go:96] found id: "f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb"
	I0110 02:26:38.334931  337858 cri.go:96] found id: "c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227"
	I0110 02:26:38.334936  337858 cri.go:96] found id: "a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e"
	I0110 02:26:38.334941  337858 cri.go:96] found id: "3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22"
	I0110 02:26:38.334945  337858 cri.go:96] found id: "d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766"
	I0110 02:26:38.334952  337858 cri.go:96] found id: "9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05"
	I0110 02:26:38.334961  337858 cri.go:96] found id: "735ed321afcb16724cf6cb26a8662a0bbc9c50672d51b165beb2d4fcc3243180"
	I0110 02:26:38.334965  337858 cri.go:96] found id: ""
	I0110 02:26:38.335001  337858 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:38.739789  337858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:38.753276  337858 pause.go:52] kubelet running: false
	I0110 02:26:38.753320  337858 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:38.916759  337858 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:38.916840  337858 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:38.998207  337858 cri.go:96] found id: "4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0"
	I0110 02:26:38.998228  337858 cri.go:96] found id: "11e07e750563655b9b9b68d1c1bd4f62c6c891c31f8f8b4f1b2aa5f35740f21c"
	I0110 02:26:38.998233  337858 cri.go:96] found id: "884c8e6bbbabba54b1caf0bf973c6a8e87573c415d5e41d4148ee26ce39d8a86"
	I0110 02:26:38.998237  337858 cri.go:96] found id: "c9ed6e5833a02b02188ebd5dc59c3905449d205507f883d7355a53640612992e"
	I0110 02:26:38.998239  337858 cri.go:96] found id: "f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb"
	I0110 02:26:38.998243  337858 cri.go:96] found id: "c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227"
	I0110 02:26:38.998246  337858 cri.go:96] found id: "a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e"
	I0110 02:26:38.998251  337858 cri.go:96] found id: "3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22"
	I0110 02:26:38.998255  337858 cri.go:96] found id: "d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766"
	I0110 02:26:38.998262  337858 cri.go:96] found id: "9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05"
	I0110 02:26:38.998267  337858 cri.go:96] found id: "735ed321afcb16724cf6cb26a8662a0bbc9c50672d51b165beb2d4fcc3243180"
	I0110 02:26:38.998271  337858 cri.go:96] found id: ""
	I0110 02:26:38.998317  337858 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:39.012333  337858 out.go:203] 
	W0110 02:26:39.014261  337858 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:26:39.014283  337858 out.go:285] * 
	* 
	W0110 02:26:39.016202  337858 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:26:39.017396  337858 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-872415 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-872415
helpers_test.go:244: (dbg) docker inspect embed-certs-872415:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30",
	        "Created": "2026-01-10T02:24:30.466412403Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325928,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:36.802269873Z",
	            "FinishedAt": "2026-01-10T02:25:35.903596056Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/hosts",
	        "LogPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30-json.log",
	        "Name": "/embed-certs-872415",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-872415:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-872415",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30",
	                "LowerDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-872415",
	                "Source": "/var/lib/docker/volumes/embed-certs-872415/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-872415",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-872415",
	                "name.minikube.sigs.k8s.io": "embed-certs-872415",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d7b9578b2c83ef6e40f7b44ea9864f2497a62e0d742d5230627e6a281e44bb72",
	            "SandboxKey": "/var/run/docker/netns/d7b9578b2c83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-872415": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9ad01e33e846d95581fd92bc0e4f762a980374124d9ad12032ce9b9cc753743a",
	                    "EndpointID": "4f56b1115e38bf6c29495031a7d00c3afd244dab259732bddb30222a8e82288a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1e:04:07:e6:4d:cc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-872415",
	                        "5c3ed37b709e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415: exit status 2 (354.474436ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-872415 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-872415 logs -n 25: (1.134362204s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                                                                                               │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p no-preload-190877 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Jan 10 02:26:04 embed-certs-872415 crio[568]: time="2026-01-10T02:26:04.098118175Z" level=info msg="Started container" PID=1798 containerID=095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper id=e4d0d8e6-5280-419c-89c7-d1ab8e2bb93a name=/runtime.v1.RuntimeService/StartContainer sandboxID=512eef4950fea400f797c59a73f1f06a472edd75532c9fd295c03d5f637fe03e
	Jan 10 02:26:04 embed-certs-872415 crio[568]: time="2026-01-10T02:26:04.133974696Z" level=info msg="Removing container: 603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2" id=cc712398-555b-49b3-9612-1c91545e695a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:04 embed-certs-872415 crio[568]: time="2026-01-10T02:26:04.142327753Z" level=info msg="Removed container 603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=cc712398-555b-49b3-9612-1c91545e695a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.164349575Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=567b8118-1db5-49dc-803a-b6f984819fc3 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.165308505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f1a38e5-744f-474e-8fc2-697406951d8a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.166453487Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=32bf6123-347d-4a2a-b581-6b22b1b1ef55 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.166616532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.17064107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.170788235Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/11a5fa94fdddf9bc9b29823a6eb73f65c8d60a01adb6942c9595a3de6b7f2892/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.17080962Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/11a5fa94fdddf9bc9b29823a6eb73f65c8d60a01adb6942c9595a3de6b7f2892/merged/etc/group: no such file or directory"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.171056199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.198739356Z" level=info msg="Created container 4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0: kube-system/storage-provisioner/storage-provisioner" id=32bf6123-347d-4a2a-b581-6b22b1b1ef55 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.199279048Z" level=info msg="Starting container: 4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0" id=0f515e1b-92c7-4ed0-ac2a-4cd9eed731db name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.201145373Z" level=info msg="Started container" PID=1812 containerID=4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0 description=kube-system/storage-provisioner/storage-provisioner id=0f515e1b-92c7-4ed0-ac2a-4cd9eed731db name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ba54aef90ed633dd28f1e0a4bec08820e464ad3dfa8cd70803b162150a1371f
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.052819067Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=853350ab-7a4a-44a1-8ecf-894e6eb41bf9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.053799345Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8930f42f-c278-4c65-8288-24257e110f04 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.054840502Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=f6b7f5c3-b10d-4e29-a57f-85e89ddd5a45 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.054998458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.061134928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.061615475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.091090619Z" level=info msg="Created container 9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=f6b7f5c3-b10d-4e29-a57f-85e89ddd5a45 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.091610877Z" level=info msg="Starting container: 9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05" id=b31fdea9-1f7a-449f-9e6c-e00ea6f24b88 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.093491772Z" level=info msg="Started container" PID=1848 containerID=9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper id=b31fdea9-1f7a-449f-9e6c-e00ea6f24b88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=512eef4950fea400f797c59a73f1f06a472edd75532c9fd295c03d5f637fe03e
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.203151011Z" level=info msg="Removing container: 095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b" id=873c6acb-dfb7-4b03-8f8b-66554b2fb0e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.211922616Z" level=info msg="Removed container 095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=873c6acb-dfb7-4b03-8f8b-66554b2fb0e1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9bc60773910d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   512eef4950fea       dashboard-metrics-scraper-867fb5f87b-smt7d   kubernetes-dashboard
	4d4d91d3f535c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   9ba54aef90ed6       storage-provisioner                          kube-system
	735ed321afcb1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   d59dba4e9be36       kubernetes-dashboard-b84665fb8-jwghz         kubernetes-dashboard
	11e07e7505636       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           53 seconds ago      Running             kube-proxy                  0                   b31cd0177f714       kube-proxy-47n8d                             kube-system
	884c8e6bbbabb       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   fbd161fa63444       kindnet-jkqz7                                kube-system
	c9ed6e5833a02       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   1e2f74e103c9b       coredns-7d764666f9-lfdgm                     kube-system
	66793d85dbdab       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   675aef6455580       busybox                                      default
	f797a2ac7bed9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   9ba54aef90ed6       storage-provisioner                          kube-system
	c18d2a75e5522       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           56 seconds ago      Running             kube-apiserver              0                   8b47e3293aa94       kube-apiserver-embed-certs-872415            kube-system
	a7aca2ea4ec4e       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           56 seconds ago      Running             etcd                        0                   facbed6f956eb       etcd-embed-certs-872415                      kube-system
	3d674808892c3       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           56 seconds ago      Running             kube-scheduler              0                   cf10e621a4b2a       kube-scheduler-embed-certs-872415            kube-system
	d1431bb51cdc7       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           56 seconds ago      Running             kube-controller-manager     0                   e67e6b616f8d8       kube-controller-manager-embed-certs-872415   kube-system
	
	
	==> coredns [c9ed6e5833a02b02188ebd5dc59c3905449d205507f883d7355a53640612992e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37788 - 49743 "HINFO IN 7866328021580403239.2016529223586732471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091892288s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-872415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-872415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=embed-certs-872415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-872415
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-872415
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                2240aa56-47aa-4229-8a1a-8150a18d3a1e
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-lfdgm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-872415                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-jkqz7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-872415             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-872415    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-47n8d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-872415             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-smt7d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-jwghz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node embed-certs-872415 event: Registered Node embed-certs-872415 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node embed-certs-872415 event: Registered Node embed-certs-872415 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e] <==
	{"level":"info","ts":"2026-01-10T02:25:43.608786Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:43.608712Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:25:43.608752Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:25:43.608968Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-10T02:25:43.609079Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:25:43.609187Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:44.298161Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:44.298206Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:44.298251Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:44.298260Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:44.298274Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.298919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.298952Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:44.298972Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.298987Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.299648Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-872415 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:44.299682Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:44.299654Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:44.300013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:44.300042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:44.301066Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:44.301139Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:44.304623Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:25:44.304663Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-10T02:26:26.093091Z","caller":"traceutil/trace.go:172","msg":"trace[110582314] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"105.348213ms","start":"2026-01-10T02:26:25.987718Z","end":"2026-01-10T02:26:26.093066Z","steps":["trace[110582314] 'process raft request'  (duration: 74.649451ms)","trace[110582314] 'compare'  (duration: 30.562564ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:26:40 up  1:09,  0 user,  load average: 3.30, 3.46, 2.37
	Linux embed-certs-872415 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [884c8e6bbbabba54b1caf0bf973c6a8e87573c415d5e41d4148ee26ce39d8a86] <==
	I0110 02:25:46.851181       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:46.851419       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0110 02:25:46.851570       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:46.851594       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:46.851610       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:47.141850       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:47.141932       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:47.141947       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:47.143398       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:47.542042       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:47.542081       1 metrics.go:72] Registering metrics
	I0110 02:25:47.542156       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:57.051983       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:25:57.052055       1 main.go:301] handling current node
	I0110 02:26:07.051696       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:07.051757       1 main.go:301] handling current node
	I0110 02:26:17.051495       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:17.051552       1 main.go:301] handling current node
	I0110 02:26:27.050942       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:27.050996       1 main.go:301] handling current node
	I0110 02:26:37.056406       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:37.056447       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227] <==
	I0110 02:25:45.300825       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:25:45.301008       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:25:45.301269       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:25:45.302153       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 02:25:45.302196       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:25:45.302206       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:25:45.302213       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:25:45.302219       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:25:45.308427       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:25:45.312784       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:45.312872       1 policy_source.go:248] refreshing policies
	I0110 02:25:45.312972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0110 02:25:45.320389       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:25:45.331331       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:45.610411       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:25:45.639710       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:25:45.656222       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:45.662950       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:45.668583       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:25:45.697253       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.240.84"}
	I0110 02:25:45.707722       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.53.130"}
	I0110 02:25:46.204023       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:25:48.925416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:25:48.976585       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:25:49.026711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766] <==
	I0110 02:25:48.476280       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477016       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477021       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477018       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-872415"
	I0110 02:25:48.477029       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477036       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477041       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478597       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477165       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478683       1 range_allocator.go:177] "Sending events to api server"
	I0110 02:25:48.476921       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478725       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 02:25:48.478760       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:48.478833       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478861       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.476699       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478624       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478402       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:25:48.479325       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.497434       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:48.500007       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.577020       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.577098       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:25:48.577111       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:25:48.597993       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [11e07e750563655b9b9b68d1c1bd4f62c6c891c31f8f8b4f1b2aa5f35740f21c] <==
	I0110 02:25:46.721371       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:25:46.778687       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:46.879284       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:46.879316       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0110 02:25:46.879419       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:25:46.897005       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:46.897083       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:25:46.902834       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:25:46.903372       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:25:46.903410       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:46.905297       1 config.go:309] "Starting node config controller"
	I0110 02:25:46.905323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:25:46.905498       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:25:46.905546       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:25:46.905589       1 config.go:200] "Starting service config controller"
	I0110 02:25:46.905595       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:25:46.905703       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:25:46.905738       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:25:47.005594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:25:47.005634       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:25:47.006759       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:25:47.006779       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22] <==
	I0110 02:25:43.841207       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:25:45.241397       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:25:45.241431       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:25:45.241443       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:25:45.241454       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:25:45.296295       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:25:45.296342       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:45.299388       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:25:45.299425       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:45.299485       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:25:45.299558       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:25:45.400508       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:00 embed-certs-872415 kubelet[735]: E0110 02:26:00.085009     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-872415" containerName="kube-apiserver"
	Jan 10 02:26:00 embed-certs-872415 kubelet[735]: E0110 02:26:00.123547     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-872415" containerName="kube-apiserver"
	Jan 10 02:26:00 embed-certs-872415 kubelet[735]: E0110 02:26:00.155596     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-872415" containerName="kube-controller-manager"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: E0110 02:26:04.052038     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: I0110 02:26:04.052080     735 scope.go:122] "RemoveContainer" containerID="603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: I0110 02:26:04.132737     735 scope.go:122] "RemoveContainer" containerID="603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: E0110 02:26:04.133011     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: I0110 02:26:04.133051     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: E0110 02:26:04.133245     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-smt7d_kubernetes-dashboard(63f8aee9-3575-4345-8493-0a47e115c43a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" podUID="63f8aee9-3575-4345-8493-0a47e115c43a"
	Jan 10 02:26:06 embed-certs-872415 kubelet[735]: E0110 02:26:06.972297     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:06 embed-certs-872415 kubelet[735]: I0110 02:26:06.972333     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:06 embed-certs-872415 kubelet[735]: E0110 02:26:06.972501     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-smt7d_kubernetes-dashboard(63f8aee9-3575-4345-8493-0a47e115c43a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" podUID="63f8aee9-3575-4345-8493-0a47e115c43a"
	Jan 10 02:26:17 embed-certs-872415 kubelet[735]: I0110 02:26:17.163864     735 scope.go:122] "RemoveContainer" containerID="f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb"
	Jan 10 02:26:22 embed-certs-872415 kubelet[735]: E0110 02:26:22.880361     735 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lfdgm" containerName="coredns"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: E0110 02:26:31.052285     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: I0110 02:26:31.052322     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: I0110 02:26:31.201814     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: E0110 02:26:31.202062     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: I0110 02:26:31.202105     735 scope.go:122] "RemoveContainer" containerID="9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: E0110 02:26:31.202319     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-smt7d_kubernetes-dashboard(63f8aee9-3575-4345-8493-0a47e115c43a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" podUID="63f8aee9-3575-4345-8493-0a47e115c43a"
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:26:36 embed-certs-872415 kubelet[735]: I0110 02:26:36.938836     735 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: kubelet.service: Consumed 1.674s CPU time.
	
	
	==> kubernetes-dashboard [735ed321afcb16724cf6cb26a8662a0bbc9c50672d51b165beb2d4fcc3243180] <==
	2026/01/10 02:25:56 Starting overwatch
	2026/01/10 02:25:56 Using namespace: kubernetes-dashboard
	2026/01/10 02:25:56 Using in-cluster config to connect to apiserver
	2026/01/10 02:25:56 Using secret token for csrf signing
	2026/01/10 02:25:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:25:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:25:56 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:25:56 Generating JWE encryption key
	2026/01/10 02:25:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:25:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:25:56 Initializing JWE encryption key from synchronized object
	2026/01/10 02:25:56 Creating in-cluster Sidecar client
	2026/01/10 02:25:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:25:56 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0] <==
	I0110 02:26:17.213994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:17.224931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:17.224997       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:26:17.227829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:20.683367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:24.945074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:28.543262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:31.597686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.620827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.625741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:34.625943       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:26:34.626070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d58c82c7-cbb5-4fa4-bce6-ee7de4cc80bf", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-872415_3ffebebb-336d-4901-a7e4-a127a15ed255 became leader
	I0110 02:26:34.626131       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-872415_3ffebebb-336d-4901-a7e4-a127a15ed255!
	W0110 02:26:34.628266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.631277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:34.726743       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-872415_3ffebebb-336d-4901-a7e4-a127a15ed255!
	W0110 02:26:36.635645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:36.640072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:38.643311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:38.647110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb] <==
	I0110 02:25:46.407758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:16.411453       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872415 -n embed-certs-872415
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872415 -n embed-certs-872415: exit status 2 (352.84257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-872415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-872415
helpers_test.go:244: (dbg) docker inspect embed-certs-872415:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30",
	        "Created": "2026-01-10T02:24:30.466412403Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325928,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:36.802269873Z",
	            "FinishedAt": "2026-01-10T02:25:35.903596056Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/hosts",
	        "LogPath": "/var/lib/docker/containers/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30/5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30-json.log",
	        "Name": "/embed-certs-872415",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-872415:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-872415",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c3ed37b709ed1883eddcfd8afebb8c16f477283866dc2c0302cc9f89730fa30",
	                "LowerDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fb627c4af9c7a63e9c44b9f3b4344704262dd27d1a7a95374956ea777eada93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-872415",
	                "Source": "/var/lib/docker/volumes/embed-certs-872415/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-872415",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-872415",
	                "name.minikube.sigs.k8s.io": "embed-certs-872415",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d7b9578b2c83ef6e40f7b44ea9864f2497a62e0d742d5230627e6a281e44bb72",
	            "SandboxKey": "/var/run/docker/netns/d7b9578b2c83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-872415": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9ad01e33e846d95581fd92bc0e4f762a980374124d9ad12032ce9b9cc753743a",
	                    "EndpointID": "4f56b1115e38bf6c29495031a7d00c3afd244dab259732bddb30222a8e82288a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1e:04:07:e6:4d:cc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-872415",
	                        "5c3ed37b709e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415: exit status 2 (338.996845ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-872415 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-872415 logs -n 25: (2.311911454s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-249405                                                                                                                                                                                                               │ disable-driver-mounts-249405 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-188604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p no-preload-190877 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Jan 10 02:26:04 embed-certs-872415 crio[568]: time="2026-01-10T02:26:04.098118175Z" level=info msg="Started container" PID=1798 containerID=095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper id=e4d0d8e6-5280-419c-89c7-d1ab8e2bb93a name=/runtime.v1.RuntimeService/StartContainer sandboxID=512eef4950fea400f797c59a73f1f06a472edd75532c9fd295c03d5f637fe03e
	Jan 10 02:26:04 embed-certs-872415 crio[568]: time="2026-01-10T02:26:04.133974696Z" level=info msg="Removing container: 603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2" id=cc712398-555b-49b3-9612-1c91545e695a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:04 embed-certs-872415 crio[568]: time="2026-01-10T02:26:04.142327753Z" level=info msg="Removed container 603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=cc712398-555b-49b3-9612-1c91545e695a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.164349575Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=567b8118-1db5-49dc-803a-b6f984819fc3 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.165308505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f1a38e5-744f-474e-8fc2-697406951d8a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.166453487Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=32bf6123-347d-4a2a-b581-6b22b1b1ef55 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.166616532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.17064107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.170788235Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/11a5fa94fdddf9bc9b29823a6eb73f65c8d60a01adb6942c9595a3de6b7f2892/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.17080962Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/11a5fa94fdddf9bc9b29823a6eb73f65c8d60a01adb6942c9595a3de6b7f2892/merged/etc/group: no such file or directory"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.171056199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.198739356Z" level=info msg="Created container 4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0: kube-system/storage-provisioner/storage-provisioner" id=32bf6123-347d-4a2a-b581-6b22b1b1ef55 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.199279048Z" level=info msg="Starting container: 4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0" id=0f515e1b-92c7-4ed0-ac2a-4cd9eed731db name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:17 embed-certs-872415 crio[568]: time="2026-01-10T02:26:17.201145373Z" level=info msg="Started container" PID=1812 containerID=4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0 description=kube-system/storage-provisioner/storage-provisioner id=0f515e1b-92c7-4ed0-ac2a-4cd9eed731db name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ba54aef90ed633dd28f1e0a4bec08820e464ad3dfa8cd70803b162150a1371f
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.052819067Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=853350ab-7a4a-44a1-8ecf-894e6eb41bf9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.053799345Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8930f42f-c278-4c65-8288-24257e110f04 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.054840502Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=f6b7f5c3-b10d-4e29-a57f-85e89ddd5a45 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.054998458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.061134928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.061615475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.091090619Z" level=info msg="Created container 9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=f6b7f5c3-b10d-4e29-a57f-85e89ddd5a45 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.091610877Z" level=info msg="Starting container: 9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05" id=b31fdea9-1f7a-449f-9e6c-e00ea6f24b88 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.093491772Z" level=info msg="Started container" PID=1848 containerID=9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper id=b31fdea9-1f7a-449f-9e6c-e00ea6f24b88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=512eef4950fea400f797c59a73f1f06a472edd75532c9fd295c03d5f637fe03e
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.203151011Z" level=info msg="Removing container: 095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b" id=873c6acb-dfb7-4b03-8f8b-66554b2fb0e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:31 embed-certs-872415 crio[568]: time="2026-01-10T02:26:31.211922616Z" level=info msg="Removed container 095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d/dashboard-metrics-scraper" id=873c6acb-dfb7-4b03-8f8b-66554b2fb0e1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9bc60773910d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   512eef4950fea       dashboard-metrics-scraper-867fb5f87b-smt7d   kubernetes-dashboard
	4d4d91d3f535c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   9ba54aef90ed6       storage-provisioner                          kube-system
	735ed321afcb1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   d59dba4e9be36       kubernetes-dashboard-b84665fb8-jwghz         kubernetes-dashboard
	11e07e7505636       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           56 seconds ago      Running             kube-proxy                  0                   b31cd0177f714       kube-proxy-47n8d                             kube-system
	884c8e6bbbabb       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   fbd161fa63444       kindnet-jkqz7                                kube-system
	c9ed6e5833a02       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   1e2f74e103c9b       coredns-7d764666f9-lfdgm                     kube-system
	66793d85dbdab       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   675aef6455580       busybox                                      default
	f797a2ac7bed9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   9ba54aef90ed6       storage-provisioner                          kube-system
	c18d2a75e5522       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           59 seconds ago      Running             kube-apiserver              0                   8b47e3293aa94       kube-apiserver-embed-certs-872415            kube-system
	a7aca2ea4ec4e       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           59 seconds ago      Running             etcd                        0                   facbed6f956eb       etcd-embed-certs-872415                      kube-system
	3d674808892c3       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           59 seconds ago      Running             kube-scheduler              0                   cf10e621a4b2a       kube-scheduler-embed-certs-872415            kube-system
	d1431bb51cdc7       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           59 seconds ago      Running             kube-controller-manager     0                   e67e6b616f8d8       kube-controller-manager-embed-certs-872415   kube-system
	
	
	==> coredns [c9ed6e5833a02b02188ebd5dc59c3905449d205507f883d7355a53640612992e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37788 - 49743 "HINFO IN 7866328021580403239.2016529223586732471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091892288s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-872415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-872415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=embed-certs-872415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-872415
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:24:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:25 +0000   Sat, 10 Jan 2026 02:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-872415
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                2240aa56-47aa-4229-8a1a-8150a18d3a1e
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-7d764666f9-lfdgm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-embed-certs-872415                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-jkqz7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-embed-certs-872415             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-872415    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-47n8d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-embed-certs-872415             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-smt7d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-jwghz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node embed-certs-872415 event: Registered Node embed-certs-872415 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node embed-certs-872415 event: Registered Node embed-certs-872415 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [a7aca2ea4ec4ec1d630947a0d365ab68519caa4f3c40d6e6853070fc4a4c003e] <==
	{"level":"info","ts":"2026-01-10T02:25:43.608786Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:43.608712Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:25:43.608752Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:25:43.608968Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-10T02:25:43.609079Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:25:43.609187Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:44.298161Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:44.298206Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:44.298251Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:44.298260Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:44.298274Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.298919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.298952Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:44.298972Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.298987Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:44.299648Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-872415 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:44.299682Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:44.299654Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:44.300013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:44.300042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:44.301066Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:44.301139Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:44.304623Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:25:44.304663Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-10T02:26:26.093091Z","caller":"traceutil/trace.go:172","msg":"trace[110582314] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"105.348213ms","start":"2026-01-10T02:26:25.987718Z","end":"2026-01-10T02:26:26.093066Z","steps":["trace[110582314] 'process raft request'  (duration: 74.649451ms)","trace[110582314] 'compare'  (duration: 30.562564ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:26:43 up  1:09,  0 user,  load average: 3.43, 3.49, 2.38
	Linux embed-certs-872415 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [884c8e6bbbabba54b1caf0bf973c6a8e87573c415d5e41d4148ee26ce39d8a86] <==
	I0110 02:25:46.851181       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:46.851419       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0110 02:25:46.851570       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:46.851594       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:46.851610       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:47.141850       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:47.141932       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:47.141947       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:47.143398       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:47.542042       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:47.542081       1 metrics.go:72] Registering metrics
	I0110 02:25:47.542156       1 controller.go:711] "Syncing nftables rules"
	I0110 02:25:57.051983       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:25:57.052055       1 main.go:301] handling current node
	I0110 02:26:07.051696       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:07.051757       1 main.go:301] handling current node
	I0110 02:26:17.051495       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:17.051552       1 main.go:301] handling current node
	I0110 02:26:27.050942       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:27.050996       1 main.go:301] handling current node
	I0110 02:26:37.056406       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 02:26:37.056447       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c18d2a75e5522089147efcb8d2db17a6b9de91293257643b461a741df482d227] <==
	I0110 02:25:45.300825       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:25:45.301008       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:25:45.301269       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:25:45.302153       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 02:25:45.302196       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:25:45.302206       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:25:45.302213       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:25:45.302219       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:25:45.308427       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:25:45.312784       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:45.312872       1 policy_source.go:248] refreshing policies
	I0110 02:25:45.312972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0110 02:25:45.320389       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:25:45.331331       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:45.610411       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:25:45.639710       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:25:45.656222       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:45.662950       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:45.668583       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:25:45.697253       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.240.84"}
	I0110 02:25:45.707722       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.53.130"}
	I0110 02:25:46.204023       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:25:48.925416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:25:48.976585       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:25:49.026711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d1431bb51cdc7fa296b7eb50a379de29c5de265de5eb52ac0f23e940f0dd5766] <==
	I0110 02:25:48.476280       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477016       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477021       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477018       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-872415"
	I0110 02:25:48.477029       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477036       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477041       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478597       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.477165       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478683       1 range_allocator.go:177] "Sending events to api server"
	I0110 02:25:48.476921       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478725       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 02:25:48.478760       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:48.478833       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478861       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.476699       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478624       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.478402       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:25:48.479325       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.497434       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:48.500007       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.577020       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:48.577098       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:25:48.577111       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:25:48.597993       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [11e07e750563655b9b9b68d1c1bd4f62c6c891c31f8f8b4f1b2aa5f35740f21c] <==
	I0110 02:25:46.721371       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:25:46.778687       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:46.879284       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:46.879316       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0110 02:25:46.879419       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:25:46.897005       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:46.897083       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:25:46.902834       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:25:46.903372       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:25:46.903410       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:46.905297       1 config.go:309] "Starting node config controller"
	I0110 02:25:46.905323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:25:46.905498       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:25:46.905546       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:25:46.905589       1 config.go:200] "Starting service config controller"
	I0110 02:25:46.905595       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:25:46.905703       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:25:46.905738       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:25:47.005594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:25:47.005634       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:25:47.006759       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:25:47.006779       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3d674808892c3ae2356254e36c341b16b81993833f3dc3beac43dcafda7c7a22] <==
	I0110 02:25:43.841207       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:25:45.241397       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:25:45.241431       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:25:45.241443       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:25:45.241454       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:25:45.296295       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:25:45.296342       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:45.299388       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:25:45.299425       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:45.299485       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:25:45.299558       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:25:45.400508       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:00 embed-certs-872415 kubelet[735]: E0110 02:26:00.085009     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-872415" containerName="kube-apiserver"
	Jan 10 02:26:00 embed-certs-872415 kubelet[735]: E0110 02:26:00.123547     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-872415" containerName="kube-apiserver"
	Jan 10 02:26:00 embed-certs-872415 kubelet[735]: E0110 02:26:00.155596     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-872415" containerName="kube-controller-manager"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: E0110 02:26:04.052038     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: I0110 02:26:04.052080     735 scope.go:122] "RemoveContainer" containerID="603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: I0110 02:26:04.132737     735 scope.go:122] "RemoveContainer" containerID="603da5ca4e66f38abb3e5063922c81f6a799a04507e0d0a844c735f7fb2c65a2"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: E0110 02:26:04.133011     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: I0110 02:26:04.133051     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:04 embed-certs-872415 kubelet[735]: E0110 02:26:04.133245     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-smt7d_kubernetes-dashboard(63f8aee9-3575-4345-8493-0a47e115c43a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" podUID="63f8aee9-3575-4345-8493-0a47e115c43a"
	Jan 10 02:26:06 embed-certs-872415 kubelet[735]: E0110 02:26:06.972297     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:06 embed-certs-872415 kubelet[735]: I0110 02:26:06.972333     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:06 embed-certs-872415 kubelet[735]: E0110 02:26:06.972501     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-smt7d_kubernetes-dashboard(63f8aee9-3575-4345-8493-0a47e115c43a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" podUID="63f8aee9-3575-4345-8493-0a47e115c43a"
	Jan 10 02:26:17 embed-certs-872415 kubelet[735]: I0110 02:26:17.163864     735 scope.go:122] "RemoveContainer" containerID="f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb"
	Jan 10 02:26:22 embed-certs-872415 kubelet[735]: E0110 02:26:22.880361     735 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lfdgm" containerName="coredns"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: E0110 02:26:31.052285     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: I0110 02:26:31.052322     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: I0110 02:26:31.201814     735 scope.go:122] "RemoveContainer" containerID="095768ca15f9d94401f909c950a25cdfb9224e33908f9a7946b6c04567bf3e9b"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: E0110 02:26:31.202062     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: I0110 02:26:31.202105     735 scope.go:122] "RemoveContainer" containerID="9bc60773910d519e5577544801cd020a4c1d86909ad14a2243300b499be89a05"
	Jan 10 02:26:31 embed-certs-872415 kubelet[735]: E0110 02:26:31.202319     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-smt7d_kubernetes-dashboard(63f8aee9-3575-4345-8493-0a47e115c43a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-smt7d" podUID="63f8aee9-3575-4345-8493-0a47e115c43a"
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:26:36 embed-certs-872415 kubelet[735]: I0110 02:26:36.938836     735 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:26:36 embed-certs-872415 systemd[1]: kubelet.service: Consumed 1.674s CPU time.
	
	
	==> kubernetes-dashboard [735ed321afcb16724cf6cb26a8662a0bbc9c50672d51b165beb2d4fcc3243180] <==
	2026/01/10 02:25:56 Starting overwatch
	2026/01/10 02:25:56 Using namespace: kubernetes-dashboard
	2026/01/10 02:25:56 Using in-cluster config to connect to apiserver
	2026/01/10 02:25:56 Using secret token for csrf signing
	2026/01/10 02:25:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:25:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:25:56 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:25:56 Generating JWE encryption key
	2026/01/10 02:25:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:25:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:25:56 Initializing JWE encryption key from synchronized object
	2026/01/10 02:25:56 Creating in-cluster Sidecar client
	2026/01/10 02:25:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:25:56 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4d4d91d3f535c7b9604e9b795683707c771f528303e85b1487e9d5dcc788a5a0] <==
	I0110 02:26:17.213994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:17.224931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:17.224997       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:26:17.227829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:20.683367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:24.945074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:28.543262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:31.597686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.620827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.625741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:34.625943       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:26:34.626070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d58c82c7-cbb5-4fa4-bce6-ee7de4cc80bf", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-872415_3ffebebb-336d-4901-a7e4-a127a15ed255 became leader
	I0110 02:26:34.626131       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-872415_3ffebebb-336d-4901-a7e4-a127a15ed255!
	W0110 02:26:34.628266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.631277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:34.726743       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-872415_3ffebebb-336d-4901-a7e4-a127a15ed255!
	W0110 02:26:36.635645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:36.640072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:38.643311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:38.647110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:40.651128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:40.657098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:42.660472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:42.734999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f797a2ac7bed949129ae00ede5e882e04be3fbc68a018031d91617daa06c33fb] <==
	I0110 02:25:46.407758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:16.411453       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872415 -n embed-certs-872415
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872415 -n embed-certs-872415: exit status 2 (380.230245ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-872415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-190877 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-190877 --alsologtostderr -v=1: exit status 80 (2.716077889s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-190877 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:26:42.154261  339881 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:42.154375  339881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:42.154384  339881 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:42.154389  339881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:42.154619  339881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:42.154881  339881 out.go:368] Setting JSON to false
	I0110 02:26:42.154918  339881 mustload.go:66] Loading cluster: no-preload-190877
	I0110 02:26:42.155283  339881 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:42.155801  339881 cli_runner.go:164] Run: docker container inspect no-preload-190877 --format={{.State.Status}}
	I0110 02:26:42.174767  339881 host.go:66] Checking if "no-preload-190877" exists ...
	I0110 02:26:42.175041  339881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:42.234880  339881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:80 SystemTime:2026-01-10 02:26:42.224711696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:42.235550  339881 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-190877 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:26:42.393725  339881 out.go:179] * Pausing node no-preload-190877 ... 
	I0110 02:26:42.534722  339881 host.go:66] Checking if "no-preload-190877" exists ...
	I0110 02:26:42.535135  339881 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:42.535178  339881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-190877
	I0110 02:26:42.554044  339881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/no-preload-190877/id_rsa Username:docker}
	I0110 02:26:42.645278  339881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:42.664257  339881 pause.go:52] kubelet running: true
	I0110 02:26:42.664332  339881 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:42.832542  339881 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:42.832621  339881 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:42.899506  339881 cri.go:96] found id: "d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d"
	I0110 02:26:42.899527  339881 cri.go:96] found id: "79048f816d46d1e234f04cdab6aadfd5b104a8ee6bf21b0051f30d8b57b09a60"
	I0110 02:26:42.899531  339881 cri.go:96] found id: "76b971ba08e5b5ff2201aff088c59de43f57741f6fadbf6e9bfc040609df53a3"
	I0110 02:26:42.899534  339881 cri.go:96] found id: "0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb"
	I0110 02:26:42.899538  339881 cri.go:96] found id: "22dfa9178e0c2dabea7eeb2af3d49aacace8b6745d6dfb7c7d46775997590b14"
	I0110 02:26:42.899543  339881 cri.go:96] found id: "6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a"
	I0110 02:26:42.899548  339881 cri.go:96] found id: "f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0"
	I0110 02:26:42.899553  339881 cri.go:96] found id: "577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c"
	I0110 02:26:42.899557  339881 cri.go:96] found id: "f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5"
	I0110 02:26:42.899568  339881 cri.go:96] found id: "8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c"
	I0110 02:26:42.899574  339881 cri.go:96] found id: "a3702be271010b730a0b039b595f580aade0d0fada0f4c3df45c272ac1f72362"
	I0110 02:26:42.899578  339881 cri.go:96] found id: ""
	I0110 02:26:42.899625  339881 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:42.911044  339881 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:42Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:26:43.241637  339881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:43.258357  339881 pause.go:52] kubelet running: false
	I0110 02:26:43.258414  339881 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:43.431930  339881 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:43.431993  339881 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:43.511589  339881 cri.go:96] found id: "d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d"
	I0110 02:26:43.511613  339881 cri.go:96] found id: "79048f816d46d1e234f04cdab6aadfd5b104a8ee6bf21b0051f30d8b57b09a60"
	I0110 02:26:43.511618  339881 cri.go:96] found id: "76b971ba08e5b5ff2201aff088c59de43f57741f6fadbf6e9bfc040609df53a3"
	I0110 02:26:43.511624  339881 cri.go:96] found id: "0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb"
	I0110 02:26:43.511628  339881 cri.go:96] found id: "22dfa9178e0c2dabea7eeb2af3d49aacace8b6745d6dfb7c7d46775997590b14"
	I0110 02:26:43.511633  339881 cri.go:96] found id: "6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a"
	I0110 02:26:43.511637  339881 cri.go:96] found id: "f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0"
	I0110 02:26:43.511641  339881 cri.go:96] found id: "577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c"
	I0110 02:26:43.511646  339881 cri.go:96] found id: "f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5"
	I0110 02:26:43.511654  339881 cri.go:96] found id: "8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c"
	I0110 02:26:43.511658  339881 cri.go:96] found id: "a3702be271010b730a0b039b595f580aade0d0fada0f4c3df45c272ac1f72362"
	I0110 02:26:43.511663  339881 cri.go:96] found id: ""
	I0110 02:26:43.511716  339881 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:43.869354  339881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:43.885282  339881 pause.go:52] kubelet running: false
	I0110 02:26:43.885340  339881 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:44.073915  339881 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:44.074006  339881 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:44.157298  339881 cri.go:96] found id: "d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d"
	I0110 02:26:44.157319  339881 cri.go:96] found id: "79048f816d46d1e234f04cdab6aadfd5b104a8ee6bf21b0051f30d8b57b09a60"
	I0110 02:26:44.157324  339881 cri.go:96] found id: "76b971ba08e5b5ff2201aff088c59de43f57741f6fadbf6e9bfc040609df53a3"
	I0110 02:26:44.157329  339881 cri.go:96] found id: "0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb"
	I0110 02:26:44.157333  339881 cri.go:96] found id: "22dfa9178e0c2dabea7eeb2af3d49aacace8b6745d6dfb7c7d46775997590b14"
	I0110 02:26:44.157337  339881 cri.go:96] found id: "6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a"
	I0110 02:26:44.157341  339881 cri.go:96] found id: "f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0"
	I0110 02:26:44.157345  339881 cri.go:96] found id: "577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c"
	I0110 02:26:44.157349  339881 cri.go:96] found id: "f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5"
	I0110 02:26:44.157356  339881 cri.go:96] found id: "8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c"
	I0110 02:26:44.157360  339881 cri.go:96] found id: "a3702be271010b730a0b039b595f580aade0d0fada0f4c3df45c272ac1f72362"
	I0110 02:26:44.157364  339881 cri.go:96] found id: ""
	I0110 02:26:44.157406  339881 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:44.556978  339881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:26:44.576685  339881 pause.go:52] kubelet running: false
	I0110 02:26:44.576747  339881 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:26:44.726206  339881 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:26:44.726290  339881 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:26:44.792250  339881 cri.go:96] found id: "d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d"
	I0110 02:26:44.792275  339881 cri.go:96] found id: "79048f816d46d1e234f04cdab6aadfd5b104a8ee6bf21b0051f30d8b57b09a60"
	I0110 02:26:44.792285  339881 cri.go:96] found id: "76b971ba08e5b5ff2201aff088c59de43f57741f6fadbf6e9bfc040609df53a3"
	I0110 02:26:44.792290  339881 cri.go:96] found id: "0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb"
	I0110 02:26:44.792294  339881 cri.go:96] found id: "22dfa9178e0c2dabea7eeb2af3d49aacace8b6745d6dfb7c7d46775997590b14"
	I0110 02:26:44.792299  339881 cri.go:96] found id: "6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a"
	I0110 02:26:44.792303  339881 cri.go:96] found id: "f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0"
	I0110 02:26:44.792307  339881 cri.go:96] found id: "577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c"
	I0110 02:26:44.792312  339881 cri.go:96] found id: "f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5"
	I0110 02:26:44.792323  339881 cri.go:96] found id: "8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c"
	I0110 02:26:44.792328  339881 cri.go:96] found id: "a3702be271010b730a0b039b595f580aade0d0fada0f4c3df45c272ac1f72362"
	I0110 02:26:44.792333  339881 cri.go:96] found id: ""
	I0110 02:26:44.792378  339881 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:26:44.805754  339881 out.go:203] 
	W0110 02:26:44.806966  339881 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:26:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:26:44.806983  339881 out.go:285] * 
	* 
	W0110 02:26:44.808606  339881 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:26:44.809603  339881 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-190877 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-190877
helpers_test.go:244: (dbg) docker inspect no-preload-190877:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022",
	        "Created": "2026-01-10T02:24:22.284558877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327491,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:40.848156855Z",
	            "FinishedAt": "2026-01-10T02:25:39.819292788Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/hostname",
	        "HostsPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/hosts",
	        "LogPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022-json.log",
	        "Name": "/no-preload-190877",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-190877:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-190877",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022",
	                "LowerDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-190877",
	                "Source": "/var/lib/docker/volumes/no-preload-190877/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-190877",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-190877",
	                "name.minikube.sigs.k8s.io": "no-preload-190877",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1d1c9bf439f124e34ce463f0015be375777a6b0b40905a5f100b42e6ce85260e",
	            "SandboxKey": "/var/run/docker/netns/1d1c9bf439f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-190877": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6a77220e3dd22bdd3789c842dfc9aca093d12a84cb3c74b1a1cb51e3e4df363",
	                    "EndpointID": "0a1d6d391e4efd057785157f423db5b91cf78d6d114c65b148e98b9acad32c52",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:ac:58:0d:a4:86",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-190877",
	                        "311ec206bd98"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
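The docker inspect output above is where the post-mortem reads the host-mapped ports of the node container; in this run 22/tcp is published on 127.0.0.1:33120. The following is a minimal Go sketch of that lookup, assuming only that docker is on PATH and that the profile container still exists; the type and variable names are illustrative and not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding mirrors the entries under NetworkSettings.Ports in the
// `docker inspect` JSON shown above.
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	// docker inspect prints a JSON array with one entry per container given.
	out, err := exec.Command("docker", "inspect", "no-preload-190877").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	// In the run captured above this prints 127.0.0.1:33120.
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
	}
}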
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877: exit status 2 (316.463038ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
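The status probe exits non-zero even though the host reports "Running", and the harness explicitly tolerates that ("may be ok"). A small Go sketch of the same probe, assuming the minikube binary path and profile name from this report; it only surfaces the exit code rather than interpreting it.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-190877", "-n", "no-preload-190877")
	out, err := cmd.CombinedOutput()
	fmt.Printf("host: %s\n", out) // "Running" in the run above
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 2 was observed here; the post-mortem treats it as non-fatal.
		fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err)
	}
}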
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-190877 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-190877 logs -n 25: (1.099061146s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p old-k8s-version-188604 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-872415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p embed-certs-872415 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-190877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p no-preload-190877 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ image   │ no-preload-190877 image list --format=json                                                                                                                                                                                                    │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p no-preload-190877 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:26:37.984376  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:40.485189  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:38.579999  338461 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:26:38.580204  338461 start.go:159] libmachine.API.Create for "newest-cni-843779" (driver="docker")
	I0110 02:26:38.580227  338461 client.go:173] LocalClient.Create starting
	I0110 02:26:38.580292  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:26:38.580322  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580343  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580394  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:26:38.580414  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580432  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580717  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:26:38.596966  338461 cli_runner.go:211] docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:26:38.597028  338461 network_create.go:284] running [docker network inspect newest-cni-843779] to gather additional debugging logs...
	I0110 02:26:38.597049  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779
	W0110 02:26:38.613182  338461 cli_runner.go:211] docker network inspect newest-cni-843779 returned with exit code 1
	I0110 02:26:38.613209  338461 network_create.go:287] error running [docker network inspect newest-cni-843779]: docker network inspect newest-cni-843779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-843779 not found
	I0110 02:26:38.613225  338461 network_create.go:289] output of [docker network inspect newest-cni-843779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-843779 not found
	
	** /stderr **
	I0110 02:26:38.613341  338461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:38.630396  338461 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:26:38.631029  338461 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:26:38.631780  338461 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:26:38.632287  338461 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:26:38.633099  338461 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea9360}
	I0110 02:26:38.633118  338461 network_create.go:124] attempt to create docker network newest-cni-843779 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:26:38.633156  338461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-843779 newest-cni-843779
	I0110 02:26:38.681030  338461 network_create.go:108] docker network newest-cni-843779 192.168.85.0/24 created
	I0110 02:26:38.681058  338461 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-843779" container
	I0110 02:26:38.681110  338461 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:26:38.698815  338461 cli_runner.go:164] Run: docker volume create newest-cni-843779 --label name.minikube.sigs.k8s.io=newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:26:38.715947  338461 oci.go:103] Successfully created a docker volume newest-cni-843779
	I0110 02:26:38.716014  338461 cli_runner.go:164] Run: docker run --rm --name newest-cni-843779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --entrypoint /usr/bin/test -v newest-cni-843779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:26:39.139879  338461 oci.go:107] Successfully prepared a docker volume newest-cni-843779
	I0110 02:26:39.139985  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:39.140001  338461 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:26:39.140074  338461 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:26:43.148608  338461 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.00849465s)
	I0110 02:26:43.148642  338461 kic.go:203] duration metric: took 4.008637849s to extract preloaded images to volume ...
	W0110 02:26:43.148739  338461 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:26:43.148767  338461 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:26:43.148804  338461 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:26:43.204668  338461 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-843779 --name newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-843779 --network newest-cni-843779 --ip 192.168.85.2 --volume newest-cni-843779:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	
	
	==> CRI-O <==
	Jan 10 02:26:09 no-preload-190877 crio[572]: time="2026-01-10T02:26:09.234011952Z" level=info msg="Started container" PID=1774 containerID=9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper id=528dbbb1-2737-45b1-9f2c-32ea695bc51d name=/runtime.v1.RuntimeService/StartContainer sandboxID=29a3f9a0ee0760170e2bb6e1b7767ba73c19412add7ba4d49c1b53ca7f89d02a
	Jan 10 02:26:09 no-preload-190877 crio[572]: time="2026-01-10T02:26:09.282293992Z" level=info msg="Removing container: 30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4" id=d345bb4a-0e70-46fe-8309-68e745830fe1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:09 no-preload-190877 crio[572]: time="2026-01-10T02:26:09.290807059Z" level=info msg="Removed container 30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=d345bb4a-0e70-46fe-8309-68e745830fe1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.308052445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f58376f1-cc96-48d3-a859-ecd601f80cb9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.308968459Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=59af9452-7b44-4cc6-9ab0-2c73e4efe560 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.310045679Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b996db31-7397-4314-8606-6bb42cbfb3c3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.310186958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.314527392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.314724405Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/71813854fc6f5c8c3a800e2ee520773dac2c951d6f8175d54d803944272abbdc/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.314759899Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/71813854fc6f5c8c3a800e2ee520773dac2c951d6f8175d54d803944272abbdc/merged/etc/group: no such file or directory"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.315067252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.355353942Z" level=info msg="Created container d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d: kube-system/storage-provisioner/storage-provisioner" id=b996db31-7397-4314-8606-6bb42cbfb3c3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.356042209Z" level=info msg="Starting container: d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d" id=157c9edd-ecea-46c6-b26a-a825d43c5517 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.357997371Z" level=info msg="Started container" PID=1788 containerID=d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d description=kube-system/storage-provisioner/storage-provisioner id=157c9edd-ecea-46c6-b26a-a825d43c5517 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69a17f439f2b2bdd12af0366b4285d221f3b5e02e3197fcf2d060b8a99f27ef4
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.176831954Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2c125f3a-18c2-45ff-9501-f9ecf3906519 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.178435001Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7cde4f14-8be1-47ca-8409-96be6af818b8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.179634886Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=ccc666a3-69ff-49f1-95c6-d9785a320620 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.179776963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.199720861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.200221994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.227619273Z" level=info msg="Created container 8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=ccc666a3-69ff-49f1-95c6-d9785a320620 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.228186312Z" level=info msg="Starting container: 8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c" id=3eee7ec6-8a96-4273-9521-02f9f5ba1dfd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.229939246Z" level=info msg="Started container" PID=1828 containerID=8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper id=3eee7ec6-8a96-4273-9521-02f9f5ba1dfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=29a3f9a0ee0760170e2bb6e1b7767ba73c19412add7ba4d49c1b53ca7f89d02a
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.356033772Z" level=info msg="Removing container: 9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321" id=cbf0e8d2-4535-45ed-9037-0efccc5cdb09 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.372369115Z" level=info msg="Removed container 9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=cbf0e8d2-4535-45ed-9037-0efccc5cdb09 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8e822d01138e5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   29a3f9a0ee076       dashboard-metrics-scraper-867fb5f87b-7c6t6   kubernetes-dashboard
	d881f109617d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   69a17f439f2b2       storage-provisioner                          kube-system
	a3702be271010       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   b87bf3c28fe6c       kubernetes-dashboard-b84665fb8-tc5gq         kubernetes-dashboard
	e608fbfbe948e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   8d5e6f0de36ac       busybox                                      default
	79048f816d46d       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   cc7c7762e5e8e       coredns-7d764666f9-xrkw6                     kube-system
	76b971ba08e5b       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   9da61c6045d0d       kindnet-rz9kz                                kube-system
	0e6a52db2dcfc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   69a17f439f2b2       storage-provisioner                          kube-system
	22dfa9178e0c2       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           56 seconds ago      Running             kube-proxy                  0                   1ff6344135540       kube-proxy-hrztb                             kube-system
	6ad7afd00fb45       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           58 seconds ago      Running             kube-apiserver              0                   0e84bb014fd4a       kube-apiserver-no-preload-190877             kube-system
	f9119f08da7d5       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        0                   792e0ef362668       etcd-no-preload-190877                       kube-system
	577d187f5e859       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           58 seconds ago      Running             kube-controller-manager     0                   87a619a85dfd1       kube-controller-manager-no-preload-190877    kube-system
	f9fd815214df5       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           58 seconds ago      Running             kube-scheduler              0                   52c204ed0ad8c       kube-scheduler-no-preload-190877             kube-system
	
	
	==> coredns [79048f816d46d1e234f04cdab6aadfd5b104a8ee6bf21b0051f30d8b57b09a60] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56514 - 42904 "HINFO IN 3110586486686972049.1206236292656618893. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112275469s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-190877
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-190877
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=no-preload-190877
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-190877
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:25:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-190877
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                d8ee769a-5dd7-45e1-8492-7abe20102f5b
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-xrkw6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-190877                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-rz9kz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-190877              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-190877     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-hrztb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-190877              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7c6t6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-tc5gq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node no-preload-190877 event: Registered Node no-preload-190877 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-190877 event: Registered Node no-preload-190877 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0] <==
	{"level":"info","ts":"2026-01-10T02:25:47.762612Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:25:47.762688Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:25:47.762692Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:47.762719Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:47.762144Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-10T02:25:47.762813Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:25:47.762932Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:47.950716Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:47.950776Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:47.950847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:47.951040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:47.951078Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.951579Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.951617Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:47.951640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.951651Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.952320Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-190877 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:47.952492Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:47.953994Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:47.954098Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:47.954182Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:47.954486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:47.958579Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:25:47.966725Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:47.970394Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:26:45 up  1:09,  0 user,  load average: 3.43, 3.49, 2.38
	Linux no-preload-190877 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [76b971ba08e5b5ff2201aff088c59de43f57741f6fadbf6e9bfc040609df53a3] <==
	I0110 02:25:49.853548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:49.853950       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:25:49.854132       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:49.854155       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:49.854183       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:50.151049       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:50.151137       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:50.151618       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:50.179282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:50.651985       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:50.652018       1 metrics.go:72] Registering metrics
	I0110 02:25:50.652108       1 controller.go:711] "Syncing nftables rules"
	I0110 02:26:00.080491       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:00.080578       1 main.go:301] handling current node
	I0110 02:26:10.079474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:10.079516       1 main.go:301] handling current node
	I0110 02:26:20.079626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:20.079662       1 main.go:301] handling current node
	I0110 02:26:30.079979       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:30.080040       1 main.go:301] handling current node
	I0110 02:26:40.079968       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:40.080018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a] <==
	I0110 02:25:49.170319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:25:49.152016       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.170326       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:25:49.155449       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.152002       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:25:49.170866       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:25:49.152031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:25:49.159508       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:25:49.179037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:25:49.179326       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:25:49.183765       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.183835       1 policy_source.go:248] refreshing policies
	E0110 02:25:49.184138       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:25:49.201336       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:49.299564       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:25:49.545662       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:25:49.605133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:25:49.635284       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:49.644543       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:49.715229       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.115.135"}
	I0110 02:25:49.725432       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.217.245"}
	I0110 02:25:50.053393       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:25:52.712285       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:25:52.964365       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:25:53.009510       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c] <==
	I0110 02:25:52.319969       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320157       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320164       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.319650       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320340       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320348       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320408       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.319233       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320688       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:25:52.320817       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.321859       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.322392       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.322652       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.323012       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-190877"
	I0110 02:25:52.323071       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.323439       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324092       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324135       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324139       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324169       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:25:52.328658       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.417470       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.417487       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:25:52.417492       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:25:52.419654       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [22dfa9178e0c2dabea7eeb2af3d49aacace8b6745d6dfb7c7d46775997590b14] <==
	I0110 02:25:49.627767       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:25:49.709070       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:49.809269       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.809307       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:25:49.809404       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:25:49.841661       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:49.841852       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:25:49.850703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:25:49.851200       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:25:49.851672       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:49.854428       1 config.go:200] "Starting service config controller"
	I0110 02:25:49.854545       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:25:49.854581       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:25:49.854826       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:25:49.855034       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:25:49.855103       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:25:49.856263       1 config.go:309] "Starting node config controller"
	I0110 02:25:49.856711       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:25:49.856757       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:25:49.955243       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:25:49.955397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:25:49.955959       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5] <==
	I0110 02:25:47.872103       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:25:49.094582       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:25:49.094624       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:25:49.094637       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:25:49.094646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:25:49.128298       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:25:49.128402       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:49.137693       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:25:49.137752       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:49.138675       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:25:49.139031       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:25:49.238249       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:02 no-preload-190877 kubelet[721]: E0110 02:26:02.263807     721 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-190877" containerName="kube-scheduler"
	Jan 10 02:26:04 no-preload-190877 kubelet[721]: E0110 02:26:04.527513     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:04 no-preload-190877 kubelet[721]: I0110 02:26:04.527560     721 scope.go:122] "RemoveContainer" containerID="30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4"
	Jan 10 02:26:04 no-preload-190877 kubelet[721]: E0110 02:26:04.527731     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: E0110 02:26:09.176108     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: I0110 02:26:09.176153     721 scope.go:122] "RemoveContainer" containerID="30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: I0110 02:26:09.281117     721 scope.go:122] "RemoveContainer" containerID="30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: E0110 02:26:09.281304     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: I0110 02:26:09.281336     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: E0110 02:26:09.281540     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:14 no-preload-190877 kubelet[721]: E0110 02:26:14.527623     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:14 no-preload-190877 kubelet[721]: I0110 02:26:14.527671     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:14 no-preload-190877 kubelet[721]: E0110 02:26:14.527879     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:20 no-preload-190877 kubelet[721]: I0110 02:26:20.307588     721 scope.go:122] "RemoveContainer" containerID="0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb"
	Jan 10 02:26:28 no-preload-190877 kubelet[721]: E0110 02:26:28.578680     721 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xrkw6" containerName="coredns"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: E0110 02:26:37.176317     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: I0110 02:26:37.176390     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: I0110 02:26:37.354332     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: E0110 02:26:37.354610     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: I0110 02:26:37.354653     721 scope.go:122] "RemoveContainer" containerID="8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: E0110 02:26:37.355166     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:42 no-preload-190877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:26:42 no-preload-190877 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:26:42 no-preload-190877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:26:42 no-preload-190877 systemd[1]: kubelet.service: Consumed 1.752s CPU time.
	
	
	==> kubernetes-dashboard [a3702be271010b730a0b039b595f580aade0d0fada0f4c3df45c272ac1f72362] <==
	2026/01/10 02:26:00 Starting overwatch
	2026/01/10 02:26:00 Using namespace: kubernetes-dashboard
	2026/01/10 02:26:00 Using in-cluster config to connect to apiserver
	2026/01/10 02:26:00 Using secret token for csrf signing
	2026/01/10 02:26:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:26:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:26:00 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:26:00 Generating JWE encryption key
	2026/01/10 02:26:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:26:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:26:00 Initializing JWE encryption key from synchronized object
	2026/01/10 02:26:00 Creating in-cluster Sidecar client
	2026/01/10 02:26:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:26:00 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb] <==
	I0110 02:25:49.587265       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:19.593665       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d] <==
	I0110 02:26:20.371017       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:20.377559       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:20.377612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:26:20.379689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:23.834994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:28.095578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:31.693862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.747544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:37.770178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:37.774590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:37.774706       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:26:37.774768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d495bc3-96f4-4c63-bede-e941f6968552", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-190877_9929f9da-2aa7-4e4e-b82f-0811b72ed4e2 became leader
	I0110 02:26:37.774862       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-190877_9929f9da-2aa7-4e4e-b82f-0811b72ed4e2!
	W0110 02:26:37.776578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:37.788748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:37.875547       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-190877_9929f9da-2aa7-4e4e-b82f-0811b72ed4e2!
	W0110 02:26:39.792003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:39.796521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:41.800777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:41.902676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:43.906439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:43.911100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:45.915003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:45.919816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190877 -n no-preload-190877
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190877 -n no-preload-190877: exit status 2 (331.203108ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-190877 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-190877
helpers_test.go:244: (dbg) docker inspect no-preload-190877:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022",
	        "Created": "2026-01-10T02:24:22.284558877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327491,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:25:40.848156855Z",
	            "FinishedAt": "2026-01-10T02:25:39.819292788Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/hostname",
	        "HostsPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/hosts",
	        "LogPath": "/var/lib/docker/containers/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022/311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022-json.log",
	        "Name": "/no-preload-190877",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-190877:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-190877",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311ec206bd98540230d9991cdced8295abbea85ea74f9abf890db435d0429022",
	                "LowerDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84ab0bb8866ee4678c4719972a253ab9120b411c15a7ab4242484a58eec08125/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-190877",
	                "Source": "/var/lib/docker/volumes/no-preload-190877/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-190877",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-190877",
	                "name.minikube.sigs.k8s.io": "no-preload-190877",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1d1c9bf439f124e34ce463f0015be375777a6b0b40905a5f100b42e6ce85260e",
	            "SandboxKey": "/var/run/docker/netns/1d1c9bf439f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-190877": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6a77220e3dd22bdd3789c842dfc9aca093d12a84cb3c74b1a1cb51e3e4df363",
	                    "EndpointID": "0a1d6d391e4efd057785157f423db5b91cf78d6d114c65b148e98b9acad32c52",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:ac:58:0d:a4:86",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-190877",
	                        "311ec206bd98"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877: exit status 2 (341.321115ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-190877 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-190877 logs -n 25: (1.072333619s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-647049 sudo cat /var/lib/kubelet/config.yaml                                                                                                         │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo systemctl status docker --all --full --no-pager                                                                                          │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ ssh     │ -p kindnet-647049 sudo systemctl cat docker --no-pager                                                                                                          │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo cat /etc/docker/daemon.json                                                                                                              │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ ssh     │ -p kindnet-647049 sudo docker system info                                                                                                                       │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ ssh     │ -p kindnet-647049 sudo systemctl status cri-docker --all --full --no-pager                                                                                      │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ ssh     │ -p kindnet-647049 sudo systemctl cat cri-docker --no-pager                                                                                                      │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                 │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ ssh     │ -p kindnet-647049 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                           │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p calico-647049 pgrep -a kubelet                                                                                                                               │ calico-647049             │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo cri-dockerd --version                                                                                                                    │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo systemctl status containerd --all --full --no-pager                                                                                      │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ ssh     │ -p kindnet-647049 sudo systemctl cat containerd --no-pager                                                                                                      │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo cat /lib/systemd/system/containerd.service                                                                                               │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo cat /etc/containerd/config.toml                                                                                                          │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo containerd config dump                                                                                                                   │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo systemctl status crio --all --full --no-pager                                                                                            │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo systemctl cat crio --no-pager                                                                                                            │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p kindnet-647049 sudo crio config                                                                                                                              │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ delete  │ -p kindnet-647049                                                                                                                                               │ kindnet-647049            │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ start   │ -p enable-default-cni-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-647049 │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:23 UTC │
	│ ssh     │ -p custom-flannel-647049 pgrep -a kubelet                                                                                                                       │ custom-flannel-647049     │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p calico-647049 sudo cat /etc/nsswitch.conf                                                                                                                    │ calico-647049             │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ ssh     │ -p calico-647049 sudo cat /etc/hosts                                                                                                                            │ calico-647049             │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
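	Each "ssh" row above is a runtime audit command executed through minikube's ssh passthrough. Assuming the profile still existed (these profiles are deleted at the end of the run), a row could be replayed by hand, for example:
	
	  out/minikube-linux-amd64 ssh -p kindnet-647049 "sudo crio config"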
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:26:37.984376  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:40.485189  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:38.579999  338461 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:26:38.580204  338461 start.go:159] libmachine.API.Create for "newest-cni-843779" (driver="docker")
	I0110 02:26:38.580227  338461 client.go:173] LocalClient.Create starting
	I0110 02:26:38.580292  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:26:38.580322  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580343  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580394  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:26:38.580414  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580432  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580717  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:26:38.596966  338461 cli_runner.go:211] docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:26:38.597028  338461 network_create.go:284] running [docker network inspect newest-cni-843779] to gather additional debugging logs...
	I0110 02:26:38.597049  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779
	W0110 02:26:38.613182  338461 cli_runner.go:211] docker network inspect newest-cni-843779 returned with exit code 1
	I0110 02:26:38.613209  338461 network_create.go:287] error running [docker network inspect newest-cni-843779]: docker network inspect newest-cni-843779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-843779 not found
	I0110 02:26:38.613225  338461 network_create.go:289] output of [docker network inspect newest-cni-843779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-843779 not found
	
	** /stderr **
	I0110 02:26:38.613341  338461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:38.630396  338461 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:26:38.631029  338461 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:26:38.631780  338461 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:26:38.632287  338461 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:26:38.633099  338461 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea9360}
	I0110 02:26:38.633118  338461 network_create.go:124] attempt to create docker network newest-cni-843779 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:26:38.633156  338461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-843779 newest-cni-843779
	I0110 02:26:38.681030  338461 network_create.go:108] docker network newest-cni-843779 192.168.85.0/24 created
	I0110 02:26:38.681058  338461 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-843779" container
	I0110 02:26:38.681110  338461 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:26:38.698815  338461 cli_runner.go:164] Run: docker volume create newest-cni-843779 --label name.minikube.sigs.k8s.io=newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:26:38.715947  338461 oci.go:103] Successfully created a docker volume newest-cni-843779
	I0110 02:26:38.716014  338461 cli_runner.go:164] Run: docker run --rm --name newest-cni-843779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --entrypoint /usr/bin/test -v newest-cni-843779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:26:39.139879  338461 oci.go:107] Successfully prepared a docker volume newest-cni-843779
	I0110 02:26:39.139985  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:39.140001  338461 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:26:39.140074  338461 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:26:43.148608  338461 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.00849465s)
	I0110 02:26:43.148642  338461 kic.go:203] duration metric: took 4.008637849s to extract preloaded images to volume ...
	W0110 02:26:43.148739  338461 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:26:43.148767  338461 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:26:43.148804  338461 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:26:43.204668  338461 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-843779 --name newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-843779 --network newest-cni-843779 --ip 192.168.85.2 --volume newest-cni-843779:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
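	
	The docker invocations recorded above (network create, volume create, then the kic container run) make up the createHost step for newest-cni-843779. A hedged way to verify the artifacts they leave on the host, using only standard docker CLI calls, would be roughly:
	
	  docker network inspect newest-cni-843779 --format '{{json .IPAM.Config}}'
	  docker volume inspect newest-cni-843779
	  docker ps --filter name=newest-cni-843779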
	
	
	==> CRI-O <==
	Jan 10 02:26:09 no-preload-190877 crio[572]: time="2026-01-10T02:26:09.234011952Z" level=info msg="Started container" PID=1774 containerID=9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper id=528dbbb1-2737-45b1-9f2c-32ea695bc51d name=/runtime.v1.RuntimeService/StartContainer sandboxID=29a3f9a0ee0760170e2bb6e1b7767ba73c19412add7ba4d49c1b53ca7f89d02a
	Jan 10 02:26:09 no-preload-190877 crio[572]: time="2026-01-10T02:26:09.282293992Z" level=info msg="Removing container: 30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4" id=d345bb4a-0e70-46fe-8309-68e745830fe1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:09 no-preload-190877 crio[572]: time="2026-01-10T02:26:09.290807059Z" level=info msg="Removed container 30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=d345bb4a-0e70-46fe-8309-68e745830fe1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.308052445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f58376f1-cc96-48d3-a859-ecd601f80cb9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.308968459Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=59af9452-7b44-4cc6-9ab0-2c73e4efe560 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.310045679Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b996db31-7397-4314-8606-6bb42cbfb3c3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.310186958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.314527392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.314724405Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/71813854fc6f5c8c3a800e2ee520773dac2c951d6f8175d54d803944272abbdc/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.314759899Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/71813854fc6f5c8c3a800e2ee520773dac2c951d6f8175d54d803944272abbdc/merged/etc/group: no such file or directory"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.315067252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.355353942Z" level=info msg="Created container d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d: kube-system/storage-provisioner/storage-provisioner" id=b996db31-7397-4314-8606-6bb42cbfb3c3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.356042209Z" level=info msg="Starting container: d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d" id=157c9edd-ecea-46c6-b26a-a825d43c5517 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:20 no-preload-190877 crio[572]: time="2026-01-10T02:26:20.357997371Z" level=info msg="Started container" PID=1788 containerID=d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d description=kube-system/storage-provisioner/storage-provisioner id=157c9edd-ecea-46c6-b26a-a825d43c5517 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69a17f439f2b2bdd12af0366b4285d221f3b5e02e3197fcf2d060b8a99f27ef4
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.176831954Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2c125f3a-18c2-45ff-9501-f9ecf3906519 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.178435001Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7cde4f14-8be1-47ca-8409-96be6af818b8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.179634886Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=ccc666a3-69ff-49f1-95c6-d9785a320620 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.179776963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.199720861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.200221994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.227619273Z" level=info msg="Created container 8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=ccc666a3-69ff-49f1-95c6-d9785a320620 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.228186312Z" level=info msg="Starting container: 8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c" id=3eee7ec6-8a96-4273-9521-02f9f5ba1dfd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.229939246Z" level=info msg="Started container" PID=1828 containerID=8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper id=3eee7ec6-8a96-4273-9521-02f9f5ba1dfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=29a3f9a0ee0760170e2bb6e1b7767ba73c19412add7ba4d49c1b53ca7f89d02a
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.356033772Z" level=info msg="Removing container: 9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321" id=cbf0e8d2-4535-45ed-9037-0efccc5cdb09 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:37 no-preload-190877 crio[572]: time="2026-01-10T02:26:37.372369115Z" level=info msg="Removed container 9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6/dashboard-metrics-scraper" id=cbf0e8d2-4535-45ed-9037-0efccc5cdb09 name=/runtime.v1.RuntimeService/RemoveContainer
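	
	The Created/Started/Removed cycle for dashboard-metrics-scraper above is the same restart loop visible in the container status table below (ATTEMPT 3, Exited). A sketch of pulling that container's own output from inside the node, with <container-id> as a placeholder taken from the first command's output, would be:
	
	  out/minikube-linux-amd64 ssh -p no-preload-190877 "sudo crictl ps -a --name dashboard-metrics-scraper"
	  out/minikube-linux-amd64 ssh -p no-preload-190877 "sudo crictl logs <container-id>"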
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8e822d01138e5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   29a3f9a0ee076       dashboard-metrics-scraper-867fb5f87b-7c6t6   kubernetes-dashboard
	d881f109617d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   69a17f439f2b2       storage-provisioner                          kube-system
	a3702be271010       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago       Running             kubernetes-dashboard        0                   b87bf3c28fe6c       kubernetes-dashboard-b84665fb8-tc5gq         kubernetes-dashboard
	e608fbfbe948e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   8d5e6f0de36ac       busybox                                      default
	79048f816d46d       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           58 seconds ago       Running             coredns                     0                   cc7c7762e5e8e       coredns-7d764666f9-xrkw6                     kube-system
	76b971ba08e5b       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           58 seconds ago       Running             kindnet-cni                 0                   9da61c6045d0d       kindnet-rz9kz                                kube-system
	0e6a52db2dcfc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   69a17f439f2b2       storage-provisioner                          kube-system
	22dfa9178e0c2       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           58 seconds ago       Running             kube-proxy                  0                   1ff6344135540       kube-proxy-hrztb                             kube-system
	6ad7afd00fb45       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           59 seconds ago       Running             kube-apiserver              0                   0e84bb014fd4a       kube-apiserver-no-preload-190877             kube-system
	f9119f08da7d5       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           59 seconds ago       Running             etcd                        0                   792e0ef362668       etcd-no-preload-190877                       kube-system
	577d187f5e859       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           59 seconds ago       Running             kube-controller-manager     0                   87a619a85dfd1       kube-controller-manager-no-preload-190877    kube-system
	f9fd815214df5       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           About a minute ago   Running             kube-scheduler              0                   52c204ed0ad8c       kube-scheduler-no-preload-190877             kube-system
	
	
	==> coredns [79048f816d46d1e234f04cdab6aadfd5b104a8ee6bf21b0051f30d8b57b09a60] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56514 - 42904 "HINFO IN 3110586486686972049.1206236292656618893. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112275469s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
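	
	The repeated "waiting for Kubernetes API" and "Failed to watch" entries above come from CoreDNS starting while the apiserver was still coming back up. A hedged follow-up from the host, assuming the kubeconfig context carries the profile name as minikube normally writes it, would be:
	
	  kubectl --context no-preload-190877 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context no-preload-190877 -n kube-system logs -l k8s-app=kube-dns --tail=20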
	
	
	==> describe nodes <==
	Name:               no-preload-190877
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-190877
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=no-preload-190877
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_24_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:24:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-190877
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:39 +0000   Sat, 10 Jan 2026 02:25:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-190877
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                d8ee769a-5dd7-45e1-8492-7abe20102f5b
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-7d764666f9-xrkw6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-190877                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-rz9kz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-190877              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-190877     200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-hrztb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-190877              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7c6t6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-tc5gq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node no-preload-190877 event: Registered Node no-preload-190877 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-190877 event: Registered Node no-preload-190877 in Controller
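	
	The two RegisteredNode events above (114s and 55s old) reflect the node being re-registered after the cluster's second start. A hedged way to confirm the same from the host, again assuming the context name matches the profile, is:
	
	  kubectl --context no-preload-190877 get events -A --field-selector involvedObject.kind=Node,involvedObject.name=no-preload-190877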
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [f9119f08da7d53c43f8344b07645c1ff5515e403a8b6a95b251708f15accb6e0] <==
	{"level":"info","ts":"2026-01-10T02:25:47.762612Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:25:47.762688Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:25:47.762692Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:47.762719Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:25:47.762144Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-10T02:25:47.762813Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:25:47.762932Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:25:47.950716Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:47.950776Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:47.950847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:25:47.951040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:47.951078Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.951579Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.951617Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:25:47.951640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.951651Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:25:47.952320Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-190877 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:25:47.952492Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:47.953994Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:47.954098Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:25:47.954182Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:47.954486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:25:47.958579Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:25:47.966725Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:25:47.970394Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:26:47 up  1:09,  0 user,  load average: 3.24, 3.45, 2.38
	Linux no-preload-190877 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [76b971ba08e5b5ff2201aff088c59de43f57741f6fadbf6e9bfc040609df53a3] <==
	I0110 02:25:49.853548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:25:49.853950       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:25:49.854132       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:25:49.854155       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:25:49.854183       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:25:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:25:50.151049       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:25:50.151137       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:25:50.151618       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:25:50.179282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:25:50.651985       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:25:50.652018       1 metrics.go:72] Registering metrics
	I0110 02:25:50.652108       1 controller.go:711] "Syncing nftables rules"
	I0110 02:26:00.080491       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:00.080578       1 main.go:301] handling current node
	I0110 02:26:10.079474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:10.079516       1 main.go:301] handling current node
	I0110 02:26:20.079626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:20.079662       1 main.go:301] handling current node
	I0110 02:26:30.079979       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:30.080040       1 main.go:301] handling current node
	I0110 02:26:40.079968       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:26:40.080018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ad7afd00fb45f713bc2a231314f18f547e221ac07c9582f185c8dff172c458a] <==
	I0110 02:25:49.170319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:25:49.152016       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.170326       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:25:49.155449       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.152002       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:25:49.170866       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:25:49.152031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:25:49.159508       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:25:49.179037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:25:49.179326       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:25:49.183765       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.183835       1 policy_source.go:248] refreshing policies
	E0110 02:25:49.184138       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:25:49.201336       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:25:49.299564       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:25:49.545662       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:25:49.605133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:25:49.635284       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:25:49.644543       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:25:49.715229       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.115.135"}
	I0110 02:25:49.725432       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.217.245"}
	I0110 02:25:50.053393       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:25:52.712285       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:25:52.964365       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:25:53.009510       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [577d187f5e859dca2b5e47fdbe503d26687fcc21697de51827c8e09a3554993c] <==
	I0110 02:25:52.319969       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320157       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320164       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.319650       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320340       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320348       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320408       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.319233       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.320688       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:25:52.320817       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.321859       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.322392       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.322652       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.323012       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-190877"
	I0110 02:25:52.323071       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.323439       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324092       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324135       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324139       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.324169       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:25:52.328658       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.417470       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:52.417487       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:25:52.417492       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:25:52.419654       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [22dfa9178e0c2dabea7eeb2af3d49aacace8b6745d6dfb7c7d46775997590b14] <==
	I0110 02:25:49.627767       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:25:49.709070       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:49.809269       1 shared_informer.go:377] "Caches are synced"
	I0110 02:25:49.809307       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:25:49.809404       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:25:49.841661       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:25:49.841852       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:25:49.850703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:25:49.851200       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:25:49.851672       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:49.854428       1 config.go:200] "Starting service config controller"
	I0110 02:25:49.854545       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:25:49.854581       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:25:49.854826       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:25:49.855034       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:25:49.855103       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:25:49.856263       1 config.go:309] "Starting node config controller"
	I0110 02:25:49.856711       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:25:49.856757       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:25:49.955243       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:25:49.955397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:25:49.955959       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f9fd815214df519a24be93449738661909fed82f445d131a65a8612d71e272f5] <==
	I0110 02:25:47.872103       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:25:49.094582       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:25:49.094624       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:25:49.094637       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:25:49.094646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:25:49.128298       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:25:49.128402       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:25:49.137693       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:25:49.137752       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:25:49.138675       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:25:49.139031       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:25:49.238249       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:02 no-preload-190877 kubelet[721]: E0110 02:26:02.263807     721 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-190877" containerName="kube-scheduler"
	Jan 10 02:26:04 no-preload-190877 kubelet[721]: E0110 02:26:04.527513     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:04 no-preload-190877 kubelet[721]: I0110 02:26:04.527560     721 scope.go:122] "RemoveContainer" containerID="30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4"
	Jan 10 02:26:04 no-preload-190877 kubelet[721]: E0110 02:26:04.527731     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: E0110 02:26:09.176108     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: I0110 02:26:09.176153     721 scope.go:122] "RemoveContainer" containerID="30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: I0110 02:26:09.281117     721 scope.go:122] "RemoveContainer" containerID="30f05404dc96d8518fe6718eabe114eb4c957852be1105919ff82b8a1f8310c4"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: E0110 02:26:09.281304     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: I0110 02:26:09.281336     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:09 no-preload-190877 kubelet[721]: E0110 02:26:09.281540     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:14 no-preload-190877 kubelet[721]: E0110 02:26:14.527623     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:14 no-preload-190877 kubelet[721]: I0110 02:26:14.527671     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:14 no-preload-190877 kubelet[721]: E0110 02:26:14.527879     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:20 no-preload-190877 kubelet[721]: I0110 02:26:20.307588     721 scope.go:122] "RemoveContainer" containerID="0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb"
	Jan 10 02:26:28 no-preload-190877 kubelet[721]: E0110 02:26:28.578680     721 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xrkw6" containerName="coredns"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: E0110 02:26:37.176317     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: I0110 02:26:37.176390     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: I0110 02:26:37.354332     721 scope.go:122] "RemoveContainer" containerID="9a5076026ef48c7a7f2240f4254a4d800edf39cc5a7afd0c5a2bb64baf3cb321"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: E0110 02:26:37.354610     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: I0110 02:26:37.354653     721 scope.go:122] "RemoveContainer" containerID="8e822d01138e57c1d3a787ae671b674c6ef901db2be46337fb7e0aef30bdc29c"
	Jan 10 02:26:37 no-preload-190877 kubelet[721]: E0110 02:26:37.355166     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7c6t6_kubernetes-dashboard(e829f179-0791-459a-8807-58a38cc7d25b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7c6t6" podUID="e829f179-0791-459a-8807-58a38cc7d25b"
	Jan 10 02:26:42 no-preload-190877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:26:42 no-preload-190877 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:26:42 no-preload-190877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:26:42 no-preload-190877 systemd[1]: kubelet.service: Consumed 1.752s CPU time.
	
	
	==> kubernetes-dashboard [a3702be271010b730a0b039b595f580aade0d0fada0f4c3df45c272ac1f72362] <==
	2026/01/10 02:26:00 Using namespace: kubernetes-dashboard
	2026/01/10 02:26:00 Using in-cluster config to connect to apiserver
	2026/01/10 02:26:00 Using secret token for csrf signing
	2026/01/10 02:26:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:26:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:26:00 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:26:00 Generating JWE encryption key
	2026/01/10 02:26:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:26:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:26:00 Initializing JWE encryption key from synchronized object
	2026/01/10 02:26:00 Creating in-cluster Sidecar client
	2026/01/10 02:26:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:26:00 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:26:00 Starting overwatch
	
	
	==> storage-provisioner [0e6a52db2dcfcfef532da1ce2dfb3828fd0f341ff1ac04c62c9c7597a7e5b0bb] <==
	I0110 02:25:49.587265       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:19.593665       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d881f109617d9a7f932521b4944235e08450940c2ab3582c653f4da86ac6507d] <==
	I0110 02:26:20.377559       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:20.377612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:26:20.379689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:23.834994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:28.095578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:31.693862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:34.747544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:37.770178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:37.774590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:37.774706       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:26:37.774768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d495bc3-96f4-4c63-bede-e941f6968552", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-190877_9929f9da-2aa7-4e4e-b82f-0811b72ed4e2 became leader
	I0110 02:26:37.774862       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-190877_9929f9da-2aa7-4e4e-b82f-0811b72ed4e2!
	W0110 02:26:37.776578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:37.788748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:26:37.875547       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-190877_9929f9da-2aa7-4e4e-b82f-0811b72ed4e2!
	W0110 02:26:39.792003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:39.796521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:41.800777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:41.902676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:43.906439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:43.911100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:45.915003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:45.919816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:47.923659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:47.927946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190877 -n no-preload-190877
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190877 -n no-preload-190877: exit status 2 (337.031004ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-190877 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-313784 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-313784 --alsologtostderr -v=1: exit status 80 (2.469447096s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-313784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:27:02.757514  345122 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:27:02.757806  345122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:02.757817  345122 out.go:374] Setting ErrFile to fd 2...
	I0110 02:27:02.757821  345122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:02.758071  345122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:27:02.758350  345122 out.go:368] Setting JSON to false
	I0110 02:27:02.758372  345122 mustload.go:66] Loading cluster: default-k8s-diff-port-313784
	I0110 02:27:02.758769  345122 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:02.759172  345122 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-313784 --format={{.State.Status}}
	I0110 02:27:02.777991  345122 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:27:02.778312  345122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:27:02.840314  345122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2026-01-10 02:27:02.830710198 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:27:02.841032  345122 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-313784 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:27:02.843263  345122 out.go:179] * Pausing node default-k8s-diff-port-313784 ... 
	I0110 02:27:02.844342  345122 host.go:66] Checking if "default-k8s-diff-port-313784" exists ...
	I0110 02:27:02.844584  345122 ssh_runner.go:195] Run: systemctl --version
	I0110 02:27:02.844626  345122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-313784
	I0110 02:27:02.862350  345122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/default-k8s-diff-port-313784/id_rsa Username:docker}
	I0110 02:27:02.954513  345122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:02.967132  345122 pause.go:52] kubelet running: true
	I0110 02:27:02.967199  345122 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:03.147536  345122 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:03.147612  345122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:03.224349  345122 cri.go:96] found id: "f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f"
	I0110 02:27:03.224367  345122 cri.go:96] found id: "e9c87ab85de9c59e7f2a0e811771f9c88502d8f5dbd60ccfb4eecf174cee932f"
	I0110 02:27:03.224388  345122 cri.go:96] found id: "85a1be97122158afba1c7dc996f622d1063cec041014b0cc8bbe1c378ba119d4"
	I0110 02:27:03.224392  345122 cri.go:96] found id: "998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad"
	I0110 02:27:03.224394  345122 cri.go:96] found id: "5a6b196ace1351b1c9640bb3a22624c7d34f7250b1aa608bdb4d91bcb09f31b4"
	I0110 02:27:03.224398  345122 cri.go:96] found id: "35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87"
	I0110 02:27:03.224401  345122 cri.go:96] found id: "fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f"
	I0110 02:27:03.224404  345122 cri.go:96] found id: "b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371"
	I0110 02:27:03.224406  345122 cri.go:96] found id: "6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d"
	I0110 02:27:03.224417  345122 cri.go:96] found id: "d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81"
	I0110 02:27:03.224423  345122 cri.go:96] found id: "2f752a6224d46906bbadd0c1a12d9b82fc4244b9f4c554a86ec2fefa82fb86f8"
	I0110 02:27:03.224425  345122 cri.go:96] found id: ""
	I0110 02:27:03.224462  345122 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:03.235957  345122 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:03Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:27:03.578332  345122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:03.593650  345122 pause.go:52] kubelet running: false
	I0110 02:27:03.593717  345122 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:03.786709  345122 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:03.786799  345122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:03.859025  345122 cri.go:96] found id: "f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f"
	I0110 02:27:03.859042  345122 cri.go:96] found id: "e9c87ab85de9c59e7f2a0e811771f9c88502d8f5dbd60ccfb4eecf174cee932f"
	I0110 02:27:03.859046  345122 cri.go:96] found id: "85a1be97122158afba1c7dc996f622d1063cec041014b0cc8bbe1c378ba119d4"
	I0110 02:27:03.859050  345122 cri.go:96] found id: "998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad"
	I0110 02:27:03.859053  345122 cri.go:96] found id: "5a6b196ace1351b1c9640bb3a22624c7d34f7250b1aa608bdb4d91bcb09f31b4"
	I0110 02:27:03.859057  345122 cri.go:96] found id: "35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87"
	I0110 02:27:03.859060  345122 cri.go:96] found id: "fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f"
	I0110 02:27:03.859062  345122 cri.go:96] found id: "b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371"
	I0110 02:27:03.859065  345122 cri.go:96] found id: "6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d"
	I0110 02:27:03.859080  345122 cri.go:96] found id: "d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81"
	I0110 02:27:03.859083  345122 cri.go:96] found id: "2f752a6224d46906bbadd0c1a12d9b82fc4244b9f4c554a86ec2fefa82fb86f8"
	I0110 02:27:03.859087  345122 cri.go:96] found id: ""
	I0110 02:27:03.859121  345122 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:04.185534  345122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:04.202472  345122 pause.go:52] kubelet running: false
	I0110 02:27:04.202537  345122 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:04.392615  345122 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:04.392705  345122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:04.471120  345122 cri.go:96] found id: "f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f"
	I0110 02:27:04.471139  345122 cri.go:96] found id: "e9c87ab85de9c59e7f2a0e811771f9c88502d8f5dbd60ccfb4eecf174cee932f"
	I0110 02:27:04.471143  345122 cri.go:96] found id: "85a1be97122158afba1c7dc996f622d1063cec041014b0cc8bbe1c378ba119d4"
	I0110 02:27:04.471146  345122 cri.go:96] found id: "998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad"
	I0110 02:27:04.471149  345122 cri.go:96] found id: "5a6b196ace1351b1c9640bb3a22624c7d34f7250b1aa608bdb4d91bcb09f31b4"
	I0110 02:27:04.471153  345122 cri.go:96] found id: "35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87"
	I0110 02:27:04.471155  345122 cri.go:96] found id: "fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f"
	I0110 02:27:04.471158  345122 cri.go:96] found id: "b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371"
	I0110 02:27:04.471161  345122 cri.go:96] found id: "6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d"
	I0110 02:27:04.471166  345122 cri.go:96] found id: "d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81"
	I0110 02:27:04.471169  345122 cri.go:96] found id: "2f752a6224d46906bbadd0c1a12d9b82fc4244b9f4c554a86ec2fefa82fb86f8"
	I0110 02:27:04.471171  345122 cri.go:96] found id: ""
	I0110 02:27:04.471205  345122 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:04.888336  345122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:04.900713  345122 pause.go:52] kubelet running: false
	I0110 02:27:04.900762  345122 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:05.064725  345122 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:05.064799  345122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:05.138154  345122 cri.go:96] found id: "f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f"
	I0110 02:27:05.138174  345122 cri.go:96] found id: "e9c87ab85de9c59e7f2a0e811771f9c88502d8f5dbd60ccfb4eecf174cee932f"
	I0110 02:27:05.138180  345122 cri.go:96] found id: "85a1be97122158afba1c7dc996f622d1063cec041014b0cc8bbe1c378ba119d4"
	I0110 02:27:05.138185  345122 cri.go:96] found id: "998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad"
	I0110 02:27:05.138189  345122 cri.go:96] found id: "5a6b196ace1351b1c9640bb3a22624c7d34f7250b1aa608bdb4d91bcb09f31b4"
	I0110 02:27:05.138194  345122 cri.go:96] found id: "35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87"
	I0110 02:27:05.138199  345122 cri.go:96] found id: "fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f"
	I0110 02:27:05.138203  345122 cri.go:96] found id: "b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371"
	I0110 02:27:05.138207  345122 cri.go:96] found id: "6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d"
	I0110 02:27:05.138215  345122 cri.go:96] found id: "d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81"
	I0110 02:27:05.138219  345122 cri.go:96] found id: "2f752a6224d46906bbadd0c1a12d9b82fc4244b9f4c554a86ec2fefa82fb86f8"
	I0110 02:27:05.138223  345122 cri.go:96] found id: ""
	I0110 02:27:05.138263  345122 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:05.153000  345122 out.go:203] 
	W0110 02:27:05.154120  345122 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:27:05.154135  345122 out.go:285] * 
	* 
	W0110 02:27:05.155815  345122 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:27:05.156811  345122 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-313784 --alsologtostderr -v=1 failed: exit status 80
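The stderr above shows the sequence the pause path runs on the node: check whether kubelet is active, disable it, list the kube-system/kubernetes-dashboard/istio-operator containers with crictl, then run `sudo runc list -f json`. It is that last step that fails, because /run/runc does not exist on this node, and that failure is what surfaces as GUEST_PAUSE. The snippet below is a minimal, illustrative Go sketch, not minikube's own implementation, that replays the same checks with os/exec; it assumes it is run as root inside the node (for example via `minikube ssh`) with crictl and runc on the PATH.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoing the command and its combined output,
// and reports a non-zero exit the way the minikube log above surfaces it.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Printf("-> %v\n", err)
	}
}

func main() {
	// 1. Is kubelet still running? (the pause path disables it first)
	run("systemctl", "is-active", "kubelet")
	// 2. Which kube-system containers does the CRI still report?
	run("crictl", "--timeout=10s", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	// 3. The step that fails in the report: on this node it prints
	//    "open /run/runc: no such file or directory" and exits 1.
	run("runc", "list", "-f", "json")
}

On a node where /run/runc is missing, the last command exits with status 1 and the same "open /run/runc: no such file or directory" message seen in the retry at 02:27:03 and the final failure at 02:27:05.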
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-313784
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-313784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85",
	        "Created": "2026-01-10T02:25:05.094879814Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:26:08.052047922Z",
	            "FinishedAt": "2026-01-10T02:26:06.826792406Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/hosts",
	        "LogPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85-json.log",
	        "Name": "/default-k8s-diff-port-313784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-313784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-313784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85",
	                "LowerDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-313784",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-313784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-313784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-313784",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-313784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1ca499599115196c3b145215ebbbb6f40a13d0fce5a9186a5856403c4249e129",
	            "SandboxKey": "/var/run/docker/netns/1ca499599115",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-313784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0894fcffb6ef151230a0e511493b85c03422956c47ed99558a627394939589f6",
	                    "EndpointID": "c38b724ed5e5e001b005e4b75dc23f70bf8545d6e0e76ca20a664e9a9fbb9551",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f6:57:33:29:ff:72",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-313784",
	                        "40f734d8ee9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784: exit status 2 (352.816057ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25: (1.003913557s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ no-preload-190877 image list --format=json                                                                                                                                                                                                    │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p no-preload-190877 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ default-k8s-diff-port-313784 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p default-k8s-diff-port-313784 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ stop    │ -p newest-cni-843779 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
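
The start row for newest-cni-843779 is the run whose output appears under "Last Start" below. As a reproduction sketch, with the binary path, profile name, and flags copied from that row of the table:

    out/minikube-linux-amd64 start -p newest-cni-843779 --memory=3072 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0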
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:26:37.984376  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:40.485189  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:38.579999  338461 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:26:38.580204  338461 start.go:159] libmachine.API.Create for "newest-cni-843779" (driver="docker")
	I0110 02:26:38.580227  338461 client.go:173] LocalClient.Create starting
	I0110 02:26:38.580292  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:26:38.580322  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580343  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580394  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:26:38.580414  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580432  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580717  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:26:38.596966  338461 cli_runner.go:211] docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:26:38.597028  338461 network_create.go:284] running [docker network inspect newest-cni-843779] to gather additional debugging logs...
	I0110 02:26:38.597049  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779
	W0110 02:26:38.613182  338461 cli_runner.go:211] docker network inspect newest-cni-843779 returned with exit code 1
	I0110 02:26:38.613209  338461 network_create.go:287] error running [docker network inspect newest-cni-843779]: docker network inspect newest-cni-843779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-843779 not found
	I0110 02:26:38.613225  338461 network_create.go:289] output of [docker network inspect newest-cni-843779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-843779 not found
	
	** /stderr **
	I0110 02:26:38.613341  338461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:38.630396  338461 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:26:38.631029  338461 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:26:38.631780  338461 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:26:38.632287  338461 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:26:38.633099  338461 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea9360}
	I0110 02:26:38.633118  338461 network_create.go:124] attempt to create docker network newest-cni-843779 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:26:38.633156  338461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-843779 newest-cni-843779
	I0110 02:26:38.681030  338461 network_create.go:108] docker network newest-cni-843779 192.168.85.0/24 created
	I0110 02:26:38.681058  338461 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-843779" container
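Subnet selection above works by inspecting the bridge networks Docker already has, skipping the ones whose /24 is taken, and creating the cluster network on a free one. A rough shell equivalent of that check plus the create call (the network name and subnet are the ones chosen in this run, and the create flags are copied from the command logged above):

    # list the subnets already claimed by existing docker networks
    for n in $(docker network ls --format '{{.Name}}'); do
      docker network inspect "$n" --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    done
    # create the cluster network on the free subnet, as minikube did here
    docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=newest-cni-843779 newest-cni-843779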
	I0110 02:26:38.681110  338461 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:26:38.698815  338461 cli_runner.go:164] Run: docker volume create newest-cni-843779 --label name.minikube.sigs.k8s.io=newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:26:38.715947  338461 oci.go:103] Successfully created a docker volume newest-cni-843779
	I0110 02:26:38.716014  338461 cli_runner.go:164] Run: docker run --rm --name newest-cni-843779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --entrypoint /usr/bin/test -v newest-cni-843779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:26:39.139879  338461 oci.go:107] Successfully prepared a docker volume newest-cni-843779
	I0110 02:26:39.139985  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:39.140001  338461 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:26:39.140074  338461 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:26:43.148608  338461 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.00849465s)
	I0110 02:26:43.148642  338461 kic.go:203] duration metric: took 4.008637849s to extract preloaded images to volume ...
	W0110 02:26:43.148739  338461 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:26:43.148767  338461 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:26:43.148804  338461 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:26:43.204668  338461 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-843779 --name newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-843779 --network newest-cni-843779 --ip 192.168.85.2 --volume newest-cni-843779:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	W0110 02:26:42.983710  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:44.983765  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:46.984713  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:43.527936  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Running}}
	I0110 02:26:43.548293  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.567102  338461 cli_runner.go:164] Run: docker exec newest-cni-843779 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:26:43.613558  338461 oci.go:144] the created container "newest-cni-843779" has a running status.
	I0110 02:26:43.613590  338461 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa...
	I0110 02:26:43.684437  338461 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:26:43.713852  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.736219  338461 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:26:43.736257  338461 kic_runner.go:114] Args: [docker exec --privileged newest-cni-843779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:26:43.785594  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.805775  338461 machine.go:94] provisionDockerMachine start ...
	I0110 02:26:43.805896  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:43.831840  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:43.832223  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:43.832251  338461 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:26:43.833032  338461 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35784->127.0.0.1:33130: read: connection reset by peer
	I0110 02:26:46.969499  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:26:46.969526  338461 ubuntu.go:182] provisioning hostname "newest-cni-843779"
	I0110 02:26:46.969593  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:46.991696  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:46.992031  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:46.992054  338461 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-843779 && echo "newest-cni-843779" | sudo tee /etc/hostname
	I0110 02:26:47.136043  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:26:47.136128  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.157826  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:47.158110  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:47.158139  338461 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:26:47.285266  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:26:47.285296  338461 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:26:47.285326  338461 ubuntu.go:190] setting up certificates
	I0110 02:26:47.285339  338461 provision.go:84] configureAuth start
	I0110 02:26:47.285388  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:47.306123  338461 provision.go:143] copyHostCerts
	I0110 02:26:47.306186  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:26:47.306200  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:26:47.306285  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:26:47.306444  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:26:47.306459  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:26:47.306503  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:26:47.306586  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:26:47.306597  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:26:47.306634  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:26:47.306711  338461 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843779 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-843779]
	I0110 02:26:47.449507  338461 provision.go:177] copyRemoteCerts
	I0110 02:26:47.449566  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:26:47.449610  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.470425  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:47.566450  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:26:47.585229  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:26:47.602746  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:26:47.620541  338461 provision.go:87] duration metric: took 335.183446ms to configureAuth
	I0110 02:26:47.620570  338461 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:26:47.620817  338461 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:47.620959  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.640508  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:47.640816  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:47.640845  338461 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:26:47.907810  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:26:47.907838  338461 machine.go:97] duration metric: took 4.102037206s to provisionDockerMachine
	I0110 02:26:47.907850  338461 client.go:176] duration metric: took 9.327615152s to LocalClient.Create
	I0110 02:26:47.907873  338461 start.go:167] duration metric: took 9.327668738s to libmachine.API.Create "newest-cni-843779"
	I0110 02:26:47.907895  338461 start.go:293] postStartSetup for "newest-cni-843779" (driver="docker")
	I0110 02:26:47.907908  338461 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:26:47.907974  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:26:47.908018  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.928412  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.024000  338461 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:26:48.027481  338461 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:26:48.027509  338461 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:26:48.027520  338461 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:26:48.027567  338461 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:26:48.027683  338461 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:26:48.027841  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:26:48.035276  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:48.055326  338461 start.go:296] duration metric: took 147.417971ms for postStartSetup
	I0110 02:26:48.055713  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:48.075567  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:48.075921  338461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:26:48.075971  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.097098  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.195147  338461 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:26:48.201200  338461 start.go:128] duration metric: took 9.622613291s to createHost
	I0110 02:26:48.201223  338461 start.go:83] releasing machines lock for "newest-cni-843779", held for 9.622720302s
	I0110 02:26:48.201284  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:48.220675  338461 ssh_runner.go:195] Run: cat /version.json
	I0110 02:26:48.220716  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.220775  338461 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:26:48.220842  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.243579  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.243844  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.405575  338461 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:48.411977  338461 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:26:48.446783  338461 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:26:48.451861  338461 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:26:48.451946  338461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:26:48.478187  338461 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0110 02:26:48.478210  338461 start.go:496] detecting cgroup driver to use...
	I0110 02:26:48.478243  338461 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:26:48.478288  338461 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:26:48.496294  338461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:26:48.508994  338461 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:26:48.509050  338461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:26:48.526619  338461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:26:48.546200  338461 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:26:48.630754  338461 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:26:48.721548  338461 docker.go:234] disabling docker service ...
	I0110 02:26:48.721596  338461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:26:48.741103  338461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:26:48.754750  338461 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:26:48.849106  338461 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:26:48.926371  338461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:26:48.938571  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:26:48.953463  338461 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:26:48.953530  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.967831  338461 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:26:48.967929  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.981096  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.994270  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.003708  338461 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:26:49.012357  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.021802  338461 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.034747  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.043418  338461 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:26:49.050386  338461 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:26:49.057269  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:49.130961  338461 ssh_runner.go:195] Run: sudo systemctl restart crio
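The sequence above rewrites /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf in place and then restarts cri-o. A quick way to confirm what was applied on the node, using only the paths and keys targeted by the printf/sed calls above:

    cat /etc/crictl.yaml
    # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expect: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
    #         conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"
    sudo systemctl is-active crio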
	I0110 02:26:49.285916  338461 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:26:49.285981  338461 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:26:49.289691  338461 start.go:574] Will wait 60s for crictl version
	I0110 02:26:49.289750  338461 ssh_runner.go:195] Run: which crictl
	I0110 02:26:49.293070  338461 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:26:49.316456  338461 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:26:49.316525  338461 ssh_runner.go:195] Run: crio --version
	I0110 02:26:49.343597  338461 ssh_runner.go:195] Run: crio --version
	I0110 02:26:49.371114  338461 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:26:49.372159  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:49.389573  338461 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:26:49.393453  338461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:49.404679  338461 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:26:49.405677  338461 kubeadm.go:884] updating cluster {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:26:49.405793  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:49.405837  338461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:49.440734  338461 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:49.440758  338461 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:26:49.440812  338461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:49.469164  338461 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:49.469186  338461 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:26:49.469194  338461 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:26:49.469275  338461 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
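The kubelet unit shown above is installed a few steps later as a systemd drop-in (10-kubeadm.conf plus kubelet.service, copied over by the scp calls below). Once the node is provisioned, the merged unit and its ExecStart flags can be read back on the node with standard systemd tooling:

    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager --full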
	I0110 02:26:49.469338  338461 ssh_runner.go:195] Run: crio config
	I0110 02:26:49.516476  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:49.516496  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:49.516510  338461 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:26:49.516530  338461 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843779 NodeName:newest-cni-843779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:26:49.516639  338461 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:26:49.516699  338461 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:26:49.524516  338461 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:26:49.524573  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:26:49.532047  338461 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:26:49.543799  338461 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:26:49.557580  338461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
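	The 2211-byte kubeadm.yaml.new copied above is the multi-document config dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sketch, not part of this test run, the following Go program splits such a file and prints a couple of kubelet fields; the local file path and the use of gopkg.in/yaml.v3 are assumptions for illustration.

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated /var/tmp/minikube/kubeadm.yaml.
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// The file holds several YAML documents separated by "---"; decode them one by one.
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the stream
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("  containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
		}
	}
}
```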
	I0110 02:26:49.569161  338461 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:26:49.572423  338461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:49.581744  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:49.662065  338461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:49.689947  338461 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779 for IP: 192.168.85.2
	I0110 02:26:49.689968  338461 certs.go:195] generating shared ca certs ...
	I0110 02:26:49.689987  338461 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.690118  338461 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:26:49.690155  338461 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:26:49.690165  338461 certs.go:257] generating profile certs ...
	I0110 02:26:49.690213  338461 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key
	I0110 02:26:49.690230  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt with IP's: []
	I0110 02:26:49.756357  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt ...
	I0110 02:26:49.756381  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt: {Name:mk133e41b9f631c1d31398329e120a6d2e8c733e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.756536  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key ...
	I0110 02:26:49.756548  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key: {Name:mk1a6751a5bfd0db1a5029ef4003e6943a863573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.756626  338461 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5
	I0110 02:26:49.756641  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:26:49.820417  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 ...
	I0110 02:26:49.820450  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5: {Name:mk37a665bf86ff3fb7ea7a72608ed18515127576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.820601  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5 ...
	I0110 02:26:49.820613  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5: {Name:mkcfe121d9bb2cde5a393290decd7e10f53e5ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.820724  338461 certs.go:382] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt
	I0110 02:26:49.820839  338461 certs.go:386] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key
	I0110 02:26:49.820918  338461 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key
	I0110 02:26:49.820934  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt with IP's: []
	I0110 02:26:49.878096  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt ...
	I0110 02:26:49.878116  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt: {Name:mk7fbd29eafac26d0fd2ce98341bca7262aa29d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.878239  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key ...
	I0110 02:26:49.878251  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key: {Name:mk6eed83d5bd4bb5410d906db9b88c82acb84bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
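	The certs.go/crypto.go steps above generate the profile's client, apiserver, and proxy-client key pairs signed by the shared CAs. As a rough, hypothetical sketch of the underlying pattern (not minikube's actual implementation), a self-signed CA like the shared "minikubeCA" can be produced with crypto/x509; the output file names and validity period below are illustrative.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}
```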
	I0110 02:26:49.878412  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:26:49.878451  338461 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:26:49.878461  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:26:49.878484  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:26:49.878507  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:26:49.878530  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:26:49.878568  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:49.879163  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:26:49.896970  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:26:49.913364  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:26:49.930220  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:26:49.947718  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:26:49.965127  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:26:49.984031  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:26:50.002271  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:26:50.020474  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:26:50.039132  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:26:50.055600  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:26:50.075592  338461 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:26:50.090453  338461 ssh_runner.go:195] Run: openssl version
	I0110 02:26:50.096522  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.103714  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:26:50.111443  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.115011  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.115064  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.153459  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:26:50.160763  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14086.pem /etc/ssl/certs/51391683.0
	I0110 02:26:50.168017  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.174985  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:26:50.182015  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.185610  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.185650  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.221124  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:50.228348  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/140862.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:50.235190  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.242739  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:26:50.249485  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.252727  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.252768  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.288639  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:26:50.296037  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
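	The sequence above installs each PEM under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), the same lookup scheme c_rehash uses. A hypothetical local equivalent of that hash-and-link step, shelling out to the same openssl invocation seen in the log, might look like the sketch below; paths are illustrative and writing under /etc/ssl/certs requires root.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject-name hash that OpenSSL uses to look up CA files.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then create a fresh symlink to the cert.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
```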
	I0110 02:26:50.303287  338461 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:26:50.306535  338461 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:26:50.306586  338461 kubeadm.go:401] StartCluster: {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:50.306674  338461 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:26:50.306717  338461 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:26:50.333629  338461 cri.go:96] found id: ""
	I0110 02:26:50.333685  338461 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:26:50.341547  338461 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:26:50.349445  338461 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:26:50.349507  338461 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:26:50.360715  338461 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:26:50.360732  338461 kubeadm.go:158] found existing configuration files:
	
	I0110 02:26:50.360788  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:26:50.386174  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:26:50.386241  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:26:50.395383  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:26:50.410044  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:26:50.410107  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:26:50.418596  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:26:50.428393  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:26:50.428444  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:26:50.436482  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:26:50.444188  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:26:50.444240  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:26:50.452657  338461 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:26:50.494415  338461 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:26:50.494503  338461 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:26:50.563724  338461 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:26:50.563829  338461 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 02:26:50.563876  338461 kubeadm.go:319] OS: Linux
	I0110 02:26:50.563960  338461 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:26:50.564045  338461 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:26:50.564142  338461 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:26:50.564234  338461 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:26:50.564307  338461 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:26:50.564383  338461 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:26:50.564454  338461 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:26:50.564517  338461 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 02:26:50.622544  338461 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:26:50.622713  338461 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:26:50.622878  338461 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:26:50.630207  338461 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0110 02:26:48.986976  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:49.484087  333054 pod_ready.go:94] pod "coredns-7d764666f9-rhgg5" is "Ready"
	I0110 02:26:49.484114  333054 pod_ready.go:86] duration metric: took 31.505548695s for pod "coredns-7d764666f9-rhgg5" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.486734  333054 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.490527  333054 pod_ready.go:94] pod "etcd-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.490552  333054 pod_ready.go:86] duration metric: took 3.797789ms for pod "etcd-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.492326  333054 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.495598  333054 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.495619  333054 pod_ready.go:86] duration metric: took 3.274816ms for pod "kube-apiserver-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.499483  333054 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.682853  333054 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.682877  333054 pod_ready.go:86] duration metric: took 183.376493ms for pod "kube-controller-manager-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.882938  333054 pod_ready.go:83] waiting for pod "kube-proxy-6dcdf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.283112  333054 pod_ready.go:94] pod "kube-proxy-6dcdf" is "Ready"
	I0110 02:26:50.283137  333054 pod_ready.go:86] duration metric: took 400.175094ms for pod "kube-proxy-6dcdf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.483261  333054 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.883249  333054 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:50.883281  333054 pod_ready.go:86] duration metric: took 399.994421ms for pod "kube-scheduler-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.883295  333054 pod_ready.go:40] duration metric: took 32.90814338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:50.927315  333054 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:50.939799  333054 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-313784" cluster and "default" namespace by default
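	The pod_ready.go loop above polls each labelled kube-system pod until it reports Ready or the wait times out. A standalone client-go sketch of that pattern, with placeholder kubeconfig path, namespace, and pod name (this is not minikube's own code), could look like:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7d764666f9-rhgg5", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out:", ctx.Err())
			return
		case <-time.After(2 * time.Second): // poll interval, illustrative
		}
	}
}
```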
	I0110 02:26:50.638660  338461 out.go:252]   - Generating certificates and keys ...
	I0110 02:26:50.638761  338461 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:26:50.638847  338461 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:26:50.748673  338461 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:26:50.808044  338461 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:26:50.825988  338461 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:26:51.053436  338461 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:26:51.189722  338461 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:26:51.189935  338461 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-843779] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:26:51.265783  338461 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:26:51.266038  338461 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-843779] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:26:51.453075  338461 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:26:51.463944  338461 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:26:51.492453  338461 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:26:51.492568  338461 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:26:51.639156  338461 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:26:51.713669  338461 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:26:51.737323  338461 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:26:51.771362  338461 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:26:51.803764  338461 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:26:51.804316  338461 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:26:51.807706  338461 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:26:51.809141  338461 out.go:252]   - Booting up control plane ...
	I0110 02:26:51.809224  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:26:51.809305  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:26:51.810160  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:26:51.836089  338461 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:26:51.836223  338461 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:26:51.843051  338461 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:26:51.843358  338461 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:26:51.843421  338461 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:26:51.940122  338461 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:26:51.940253  338461 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:26:52.440845  338461 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.777212ms
	I0110 02:26:52.443950  338461 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:26:52.444086  338461 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0110 02:26:52.444167  338461 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:26:52.444244  338461 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:26:53.449008  338461 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004897675s
	I0110 02:26:54.312692  338461 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.868609736s
	I0110 02:26:55.946035  338461 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501977175s
	I0110 02:26:55.963146  338461 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:26:55.973142  338461 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:26:55.980674  338461 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:26:55.980865  338461 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-843779 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:26:55.987964  338461 kubeadm.go:319] [bootstrap-token] Using token: 1ffugu.jcse9fz4pyvkzq7m
	I0110 02:26:55.989224  338461 out.go:252]   - Configuring RBAC rules ...
	I0110 02:26:55.989358  338461 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:26:55.992014  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:26:55.996727  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:26:55.998941  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:26:56.001013  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:26:56.003983  338461 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:26:56.351224  338461 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:26:56.766422  338461 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:26:57.351738  338461 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:26:57.352701  338461 kubeadm.go:319] 
	I0110 02:26:57.352806  338461 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:26:57.352825  338461 kubeadm.go:319] 
	I0110 02:26:57.352959  338461 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:26:57.352968  338461 kubeadm.go:319] 
	I0110 02:26:57.352998  338461 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:26:57.353086  338461 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:26:57.353183  338461 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:26:57.353201  338461 kubeadm.go:319] 
	I0110 02:26:57.353278  338461 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:26:57.353288  338461 kubeadm.go:319] 
	I0110 02:26:57.353328  338461 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:26:57.353333  338461 kubeadm.go:319] 
	I0110 02:26:57.353390  338461 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:26:57.353464  338461 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:26:57.353532  338461 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:26:57.353544  338461 kubeadm.go:319] 
	I0110 02:26:57.353648  338461 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:26:57.353744  338461 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:26:57.353752  338461 kubeadm.go:319] 
	I0110 02:26:57.353880  338461 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1ffugu.jcse9fz4pyvkzq7m \
	I0110 02:26:57.354051  338461 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 \
	I0110 02:26:57.354081  338461 kubeadm.go:319] 	--control-plane 
	I0110 02:26:57.354090  338461 kubeadm.go:319] 
	I0110 02:26:57.354183  338461 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:26:57.354189  338461 kubeadm.go:319] 
	I0110 02:26:57.354317  338461 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1ffugu.jcse9fz4pyvkzq7m \
	I0110 02:26:57.354475  338461 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 
	I0110 02:26:57.356934  338461 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 02:26:57.357052  338461 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:26:57.357076  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:57.357083  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:57.358426  338461 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:26:57.359425  338461 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:26:57.363469  338461 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:26:57.363485  338461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:26:57.376178  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:26:57.578386  338461 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:26:57.578562  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:57.578594  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-843779 minikube.k8s.io/updated_at=2026_01_10T02_26_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=newest-cni-843779 minikube.k8s.io/primary=true
	I0110 02:26:57.588227  338461 ops.go:34] apiserver oom_adj: -16
	I0110 02:26:57.657708  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:58.158330  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:58.658496  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:59.158389  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:59.657973  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:00.158625  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:00.657790  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:01.158769  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:01.658356  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:02.157852  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:02.222922  338461 kubeadm.go:1114] duration metric: took 4.644421285s to wait for elevateKubeSystemPrivileges
	I0110 02:27:02.222958  338461 kubeadm.go:403] duration metric: took 11.916375506s to StartCluster
	I0110 02:27:02.222979  338461 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:02.223054  338461 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:02.224315  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:02.224602  338461 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:27:02.224625  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:27:02.224720  338461 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:27:02.224804  338461 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-843779"
	I0110 02:27:02.224819  338461 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-843779"
	I0110 02:27:02.224830  338461 addons.go:70] Setting default-storageclass=true in profile "newest-cni-843779"
	I0110 02:27:02.224854  338461 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:02.224854  338461 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843779"
	I0110 02:27:02.224821  338461 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:02.225428  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.225523  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.226702  338461 out.go:179] * Verifying Kubernetes components...
	I0110 02:27:02.227842  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:02.250197  338461 addons.go:239] Setting addon default-storageclass=true in "newest-cni-843779"
	I0110 02:27:02.250249  338461 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:02.250829  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.251301  338461 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:27:02.252449  338461 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:02.252463  338461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:27:02.252502  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:02.281535  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:02.282356  338461 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:02.282376  338461 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:27:02.282444  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:02.310280  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:02.321999  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:27:02.376661  338461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:02.399734  338461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:02.431196  338461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:02.524677  338461 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0110 02:27:02.525738  338461 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:27:02.525801  338461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:27:02.735965  338461 api_server.go:72] duration metric: took 511.326051ms to wait for apiserver process to appear ...
	I0110 02:27:02.735993  338461 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:27:02.736010  338461 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:02.742014  338461 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:27:02.743062  338461 api_server.go:141] control plane version: v1.35.0
	I0110 02:27:02.743090  338461 api_server.go:131] duration metric: took 7.089818ms to wait for apiserver health ...
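	The readiness check above simply polls https://192.168.85.2:8443/healthz until it returns 200. A minimal illustrative poller, assuming that endpoint and skipping TLS verification because the cluster CA is not loaded in this sketch, might be:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serving cert is signed by the cluster CA, which this sketch does not
		// load, so certificate verification is skipped here for illustration only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			resp.Body.Close()
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
```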
	I0110 02:27:02.743103  338461 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:27:02.745969  338461 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:27:02.746876  338461 system_pods.go:59] 8 kube-system pods found
	I0110 02:27:02.746927  338461 system_pods.go:61] "coredns-7d764666f9-zmtqf" [bab0ce6c-6845-4a76-aba8-25902122e535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:02.746940  338461 system_pods.go:61] "etcd-newest-cni-843779" [fdd4d85a-8248-4455-82c1-256311f58e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:27:02.746952  338461 system_pods.go:61] "kindnet-p5kwz" [a4006850-95c0-4567-9f85-7914adcf599d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:27:02.746962  338461 system_pods.go:61] "kube-apiserver-newest-cni-843779" [6c2775ff-47fa-4806-9434-1cf525435963] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:27:02.746972  338461 system_pods.go:61] "kube-controller-manager-newest-cni-843779" [3d61a2c1-6564-4d15-9c8b-1eaefd4c6878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:27:02.746986  338461 system_pods.go:61] "kube-proxy-9djhz" [d97a2a8a-cfa4-414f-ad6d-47af95479498] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:27:02.747020  338461 system_pods.go:61] "kube-scheduler-newest-cni-843779" [b6848e97-9fd5-4a56-b28d-0f581cc698b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:27:02.747038  338461 system_pods.go:61] "storage-provisioner" [4f1dd65f-c7de-48ab-8d72-fcc925bbd6be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:02.747051  338461 system_pods.go:74] duration metric: took 3.939591ms to wait for pod list to return data ...
	I0110 02:27:02.747065  338461 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:27:02.747347  338461 addons.go:530] duration metric: took 522.62294ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:27:02.749774  338461 default_sa.go:45] found service account: "default"
	I0110 02:27:02.749806  338461 default_sa.go:55] duration metric: took 2.7205ms for default service account to be created ...
	I0110 02:27:02.749819  338461 kubeadm.go:587] duration metric: took 525.183285ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:02.749838  338461 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:27:02.752066  338461 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:27:02.752090  338461 node_conditions.go:123] node cpu capacity is 8
	I0110 02:27:02.752105  338461 node_conditions.go:105] duration metric: took 2.261829ms to run NodePressure ...
	I0110 02:27:02.752117  338461 start.go:242] waiting for startup goroutines ...
	I0110 02:27:03.028702  338461 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-843779" context rescaled to 1 replicas
	I0110 02:27:03.028738  338461 start.go:247] waiting for cluster config update ...
	I0110 02:27:03.028749  338461 start.go:256] writing updated cluster config ...
	I0110 02:27:03.029031  338461 ssh_runner.go:195] Run: rm -f paused
	I0110 02:27:03.083609  338461 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:27:03.085383  338461 out.go:179] * Done! kubectl is now configured to use "newest-cni-843779" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:26:37 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:37.347003275Z" level=info msg="Started container" PID=1804 containerID=9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper id=465914cb-7da5-4b4b-9182-4d19a99907c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41c3f2b68ffd6a9d5178b12b198f8d740b6707f9b5b2d6908989b81b6c84c5fb
	Jan 10 02:26:38 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:38.161778024Z" level=info msg="Removing container: 21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33" id=af87359e-a599-4eca-8425-c797ac9e6757 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:38 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:38.171160956Z" level=info msg="Removed container 21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=af87359e-a599-4eca-8425-c797ac9e6757 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.19125691Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=abb3732d-abc7-443d-b272-dff4a6299fb1 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.192637314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=64943090-48a8-4dd4-a0f0-0e248d6e426b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.19404382Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b5205294-34d5-4b6c-928c-a8e3a942d85c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.194187909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.199947048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.200164022Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5224d0880c9c301eec779f03d4d81c80f85bf1cc0ddbb1597d76c32f189d5d9d/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.200310075Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5224d0880c9c301eec779f03d4d81c80f85bf1cc0ddbb1597d76c32f189d5d9d/merged/etc/group: no such file or directory"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.200872578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.232724648Z" level=info msg="Created container f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f: kube-system/storage-provisioner/storage-provisioner" id=b5205294-34d5-4b6c-928c-a8e3a942d85c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.233409017Z" level=info msg="Starting container: f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f" id=4fd40f79-62e9-4e70-8569-408c20cd07c5 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.235506225Z" level=info msg="Started container" PID=1818 containerID=f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f description=kube-system/storage-provisioner/storage-provisioner id=4fd40f79-62e9-4e70-8569-408c20cd07c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67820aefb2c369a2b0f2e00aa163bb93638de0ad900d5d5e4da178b1f4bd92be
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.076596171Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d3c1e8a9-06b2-4a75-a4c5-c639816c3bc8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.07981243Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=17415710-0156-4fe9-ae7c-2bbca051e2d8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.080878077Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=14f98366-8cb9-439a-9a2e-a608fa035eb5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.081059085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.087190206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.087742614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.115248556Z" level=info msg="Created container d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=14f98366-8cb9-439a-9a2e-a608fa035eb5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.115757281Z" level=info msg="Starting container: d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81" id=85febb7f-df25-4f95-946c-bf7b2c47f0c0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.117734195Z" level=info msg="Started container" PID=1855 containerID=d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper id=85febb7f-df25-4f95-946c-bf7b2c47f0c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41c3f2b68ffd6a9d5178b12b198f8d740b6707f9b5b2d6908989b81b6c84c5fb
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.2231911Z" level=info msg="Removing container: 9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7" id=fbeae63c-6884-4e33-b26e-401f2e8aa14f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.231827449Z" level=info msg="Removed container 9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=fbeae63c-6884-4e33-b26e-401f2e8aa14f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d9b4206d7f0ac       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   41c3f2b68ffd6       dashboard-metrics-scraper-867fb5f87b-shxh2             kubernetes-dashboard
	f9c6c31df7faa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   67820aefb2c36       storage-provisioner                                    kube-system
	2f752a6224d46       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   e68152f7c1559       kubernetes-dashboard-b84665fb8-cvzmq                   kubernetes-dashboard
	e9c87ab85de9c       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   840a1753e8c35       coredns-7d764666f9-rhgg5                               kube-system
	8f9e78594e45a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   352555fb853f6       busybox                                                default
	85a1be9712215       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           48 seconds ago      Running             kindnet-cni                 0                   fbc02dec624c5       kindnet-wbscw                                          kube-system
	998034535f5da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   67820aefb2c36       storage-provisioner                                    kube-system
	5a6b196ace135       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           48 seconds ago      Running             kube-proxy                  0                   31e5ac899a7f5       kube-proxy-6dcdf                                       kube-system
	35cfd8caca1ff       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           51 seconds ago      Running             kube-scheduler              0                   8acf12c49406c       kube-scheduler-default-k8s-diff-port-313784            kube-system
	fc29eda71f4bd       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           51 seconds ago      Running             kube-controller-manager     0                   9d111be729ca8       kube-controller-manager-default-k8s-diff-port-313784   kube-system
	b5de7f05c48c0       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           51 seconds ago      Running             etcd                        0                   70726124463f1       etcd-default-k8s-diff-port-313784                      kube-system
	6f7b3a029a3bc       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           51 seconds ago      Running             kube-apiserver              0                   e1893257de94c       kube-apiserver-default-k8s-diff-port-313784            kube-system
	
	
	==> coredns [e9c87ab85de9c59e7f2a0e811771f9c88502d8f5dbd60ccfb4eecf174cee932f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48099 - 42204 "HINFO IN 3807732134652601533.6838935619091275238. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071820786s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-313784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-313784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=default-k8s-diff-port-313784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-313784
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-313784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                eaef45ee-c0a4-4074-89b2-25c5e6ae4f6a
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-7d764666f9-rhgg5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-313784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-wbscw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-313784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-313784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-6dcdf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-313784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-shxh2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cvzmq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  104s  node-controller  Node default-k8s-diff-port-313784 event: Registered Node default-k8s-diff-port-313784 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node default-k8s-diff-port-313784 event: Registered Node default-k8s-diff-port-313784 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371] <==
	{"level":"info","ts":"2026-01-10T02:26:14.639275Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:26:14.639348Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T02:26:14.639422Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T02:26:14.639501Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2026-01-10T02:26:14.638808Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"dfc97eb0aae75b33","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-10T02:26:14.639596Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:26:14.641037Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:26:15.529932Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:15.529977Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:15.530018Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:15.530028Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:26:15.530042Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.530720Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.530738Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:26:15.530753Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.530759Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.531396Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:26:15.531429Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:26:15.531394Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:default-k8s-diff-port-313784 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:26:15.531706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:26:15.531730Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:26:15.533213Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:26:15.533505Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:26:15.535746Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:26:15.535844Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 02:27:06 up  1:09,  0 user,  load average: 2.84, 3.34, 2.36
	Linux default-k8s-diff-port-313784 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85a1be97122158afba1c7dc996f622d1063cec041014b0cc8bbe1c378ba119d4] <==
	I0110 02:26:17.701734       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:26:17.701976       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 02:26:17.702159       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:26:17.702184       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:26:17.702201       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:26:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:26:17.901530       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:26:17.901593       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:26:17.901606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:26:18.002170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:26:18.402380       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:26:18.402409       1 metrics.go:72] Registering metrics
	I0110 02:26:18.402476       1 controller.go:711] "Syncing nftables rules"
	I0110 02:26:27.901658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:27.901721       1 main.go:301] handling current node
	I0110 02:26:37.902141       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:37.902201       1 main.go:301] handling current node
	I0110 02:26:47.902198       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:47.902251       1 main.go:301] handling current node
	I0110 02:26:57.902322       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:57.902364       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d] <==
	I0110 02:26:16.477203       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:26:16.477214       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:26:16.477104       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 02:26:16.477299       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:26:16.477311       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:26:16.477347       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:26:16.477372       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:26:16.477640       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:26:16.477668       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:26:16.479961       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:16.484222       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0110 02:26:16.485147       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:26:16.494350       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 02:26:16.512793       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:26:16.718432       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:26:16.744818       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:26:16.760135       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:26:16.767615       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:26:16.774265       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:26:16.804677       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.80.27"}
	I0110 02:26:16.814275       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.205.23"}
	I0110 02:26:17.382463       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:26:20.074595       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:26:20.273381       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:26:20.324051       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f] <==
	I0110 02:26:19.624656       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624657       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624667       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624790       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624671       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625577       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625617       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625684       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625802       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625832       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625928       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625938       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625960       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625986       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625803       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.626369       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.626369       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.627392       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.635710       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:26:19.724140       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.724157       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:26:19.724163       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:26:19.736293       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:20.277385       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0110 02:26:20.277478       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [5a6b196ace1351b1c9640bb3a22624c7d34f7250b1aa608bdb4d91bcb09f31b4] <==
	I0110 02:26:17.473849       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:26:17.556029       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:26:17.657062       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:17.657105       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0110 02:26:17.657212       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:26:17.677776       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:26:17.677822       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:26:17.683630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:26:17.684091       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:26:17.684128       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:26:17.685628       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:26:17.685649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:26:17.685697       1 config.go:200] "Starting service config controller"
	I0110 02:26:17.685713       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:26:17.685719       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:26:17.685736       1 config.go:309] "Starting node config controller"
	I0110 02:26:17.685744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:26:17.685750       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:26:17.685719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:26:17.786067       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:26:17.786171       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:26:17.786168       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87] <==
	I0110 02:26:15.227400       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:26:16.413904       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:26:16.413936       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:26:16.413949       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:26:16.413958       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:26:16.446494       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:26:16.446596       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:26:16.449388       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:26:16.449419       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:26:16.449511       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:26:16.449556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:26:16.550973       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:29 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:29.137975     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:26:32 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:32.463318     743 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-313784" containerName="kube-apiserver"
	Jan 10 02:26:33 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:33.147169     743 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-313784" containerName="kube-apiserver"
	Jan 10 02:26:37 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:37.307779     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:37 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:37.307827     743 scope.go:122] "RemoveContainer" containerID="21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:38.160479     743 scope.go:122] "RemoveContainer" containerID="21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:38.160738     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:38.160776     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:38.161002     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:26:47 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:47.308134     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:47 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:47.308179     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:26:47 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:47.308397     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:26:48 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:48.190740     743 scope.go:122] "RemoveContainer" containerID="998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad"
	Jan 10 02:26:49 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:49.418819     743 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rhgg5" containerName="coredns"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: E0110 02:27:00.075996     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:00.076056     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:00.221857     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: E0110 02:27:00.222096     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:00.222135     743 scope.go:122] "RemoveContainer" containerID="d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: E0110 02:27:00.222356     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:27:03 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:03.125752     743 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: kubelet.service: Consumed 1.601s CPU time.
	
	
	==> kubernetes-dashboard [2f752a6224d46906bbadd0c1a12d9b82fc4244b9f4c554a86ec2fefa82fb86f8] <==
	2026/01/10 02:26:24 Starting overwatch
	2026/01/10 02:26:24 Using namespace: kubernetes-dashboard
	2026/01/10 02:26:24 Using in-cluster config to connect to apiserver
	2026/01/10 02:26:24 Using secret token for csrf signing
	2026/01/10 02:26:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:26:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:26:24 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:26:24 Generating JWE encryption key
	2026/01/10 02:26:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:26:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:26:24 Initializing JWE encryption key from synchronized object
	2026/01/10 02:26:24 Creating in-cluster Sidecar client
	2026/01/10 02:26:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:26:24 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad] <==
	I0110 02:26:17.442180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:47.446289       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f] <==
	I0110 02:26:48.249643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:48.257248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:48.257291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:26:48.260048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:51.714814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:55.975567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:59.573991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:02.628333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:05.650841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:05.655834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:27:05.656031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:27:05.656238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313784_49a694cf-c504-4e2a-9dd4-6ff4e77429db!
	I0110 02:27:05.656284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"156faf5c-a028-4a8b-8a5c-5f90c9b1d50d", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-313784_49a694cf-c504-4e2a-9dd4-6ff4e77429db became leader
	W0110 02:27:05.659205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:05.665012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:27:05.756459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313784_49a694cf-c504-4e2a-9dd4-6ff4e77429db!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784: exit status 2 (334.347886ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-313784
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-313784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85",
	        "Created": "2026-01-10T02:25:05.094879814Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:26:08.052047922Z",
	            "FinishedAt": "2026-01-10T02:26:06.826792406Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/hosts",
	        "LogPath": "/var/lib/docker/containers/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85/40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85-json.log",
	        "Name": "/default-k8s-diff-port-313784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-313784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-313784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40f734d8ee9e68d206798d6652e90c5e64465c6f9e52884bf996165d99516e85",
	                "LowerDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/134fe433bfa97c0d56ecaf13fe01f9e70fd1a3cabbcb76846ffb05484514084e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-313784",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-313784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-313784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-313784",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-313784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1ca499599115196c3b145215ebbbb6f40a13d0fce5a9186a5856403c4249e129",
	            "SandboxKey": "/var/run/docker/netns/1ca499599115",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-313784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0894fcffb6ef151230a0e511493b85c03422956c47ed99558a627394939589f6",
	                    "EndpointID": "c38b724ed5e5e001b005e4b75dc23f70bf8545d6e0e76ca20a664e9a9fbb9551",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f6:57:33:29:ff:72",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-313784",
	                        "40f734d8ee9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784: exit status 2 (311.808537ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25: (1.02481208s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ no-preload-190877 image list --format=json                                                                                                                                                                                                    │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p no-preload-190877 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ default-k8s-diff-port-313784 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p default-k8s-diff-port-313784 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ stop    │ -p newest-cni-843779 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
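The Audit table above lists every minikube invocation on the agent during this window, across all profiles. When reproducing locally, the same post-mortem command shown earlier can simply be filtered to the profile under test (a hypothetical convenience step, not part of the CI run):

	# Hypothetical local filter: re-render the post-mortem sections and keep only
	# the audit rows mentioning the failing profile.
	out/minikube-linux-amd64 -p default-k8s-diff-port-313784 logs -n 25 \
	  | grep 'default-k8s-diff-port-313784'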
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
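Each entry below follows that klog-style format; reading the first captured line as a worked example (an editorial annotation, not part of the captured log):

	# I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	#  I                -> severity (Info; W/E/F would be Warning/Error/Fatal)
	#  0110             -> month and day (Jan 10)
	#  02:26:38.395701  -> wall-clock time with microseconds
	#  338461           -> thread/process id of this minikube run
	#  out.go:360       -> source file and line that emitted the message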
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:26:37.984376  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:40.485189  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:38.579999  338461 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:26:38.580204  338461 start.go:159] libmachine.API.Create for "newest-cni-843779" (driver="docker")
	I0110 02:26:38.580227  338461 client.go:173] LocalClient.Create starting
	I0110 02:26:38.580292  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:26:38.580322  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580343  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580394  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:26:38.580414  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580432  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580717  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:26:38.596966  338461 cli_runner.go:211] docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:26:38.597028  338461 network_create.go:284] running [docker network inspect newest-cni-843779] to gather additional debugging logs...
	I0110 02:26:38.597049  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779
	W0110 02:26:38.613182  338461 cli_runner.go:211] docker network inspect newest-cni-843779 returned with exit code 1
	I0110 02:26:38.613209  338461 network_create.go:287] error running [docker network inspect newest-cni-843779]: docker network inspect newest-cni-843779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-843779 not found
	I0110 02:26:38.613225  338461 network_create.go:289] output of [docker network inspect newest-cni-843779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-843779 not found
	
	** /stderr **
	I0110 02:26:38.613341  338461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:38.630396  338461 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:26:38.631029  338461 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:26:38.631780  338461 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:26:38.632287  338461 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:26:38.633099  338461 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea9360}
	I0110 02:26:38.633118  338461 network_create.go:124] attempt to create docker network newest-cni-843779 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:26:38.633156  338461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-843779 newest-cni-843779
	I0110 02:26:38.681030  338461 network_create.go:108] docker network newest-cni-843779 192.168.85.0/24 created
	I0110 02:26:38.681058  338461 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-843779" container
	I0110 02:26:38.681110  338461 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:26:38.698815  338461 cli_runner.go:164] Run: docker volume create newest-cni-843779 --label name.minikube.sigs.k8s.io=newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:26:38.715947  338461 oci.go:103] Successfully created a docker volume newest-cni-843779
	I0110 02:26:38.716014  338461 cli_runner.go:164] Run: docker run --rm --name newest-cni-843779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --entrypoint /usr/bin/test -v newest-cni-843779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:26:39.139879  338461 oci.go:107] Successfully prepared a docker volume newest-cni-843779
	I0110 02:26:39.139985  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:39.140001  338461 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:26:39.140074  338461 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:26:43.148608  338461 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.00849465s)
	I0110 02:26:43.148642  338461 kic.go:203] duration metric: took 4.008637849s to extract preloaded images to volume ...
	W0110 02:26:43.148739  338461 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:26:43.148767  338461 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:26:43.148804  338461 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:26:43.204668  338461 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-843779 --name newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-843779 --network newest-cni-843779 --ip 192.168.85.2 --volume newest-cni-843779:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	W0110 02:26:42.983710  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:44.983765  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:46.984713  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:43.527936  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Running}}
	I0110 02:26:43.548293  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.567102  338461 cli_runner.go:164] Run: docker exec newest-cni-843779 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:26:43.613558  338461 oci.go:144] the created container "newest-cni-843779" has a running status.
	I0110 02:26:43.613590  338461 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa...
	I0110 02:26:43.684437  338461 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:26:43.713852  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.736219  338461 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:26:43.736257  338461 kic_runner.go:114] Args: [docker exec --privileged newest-cni-843779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:26:43.785594  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.805775  338461 machine.go:94] provisionDockerMachine start ...
	I0110 02:26:43.805896  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:43.831840  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:43.832223  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:43.832251  338461 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:26:43.833032  338461 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35784->127.0.0.1:33130: read: connection reset by peer
	I0110 02:26:46.969499  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:26:46.969526  338461 ubuntu.go:182] provisioning hostname "newest-cni-843779"
	I0110 02:26:46.969593  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:46.991696  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:46.992031  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:46.992054  338461 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-843779 && echo "newest-cni-843779" | sudo tee /etc/hostname
	I0110 02:26:47.136043  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:26:47.136128  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.157826  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:47.158110  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:47.158139  338461 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:26:47.285266  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:26:47.285296  338461 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:26:47.285326  338461 ubuntu.go:190] setting up certificates
	I0110 02:26:47.285339  338461 provision.go:84] configureAuth start
	I0110 02:26:47.285388  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:47.306123  338461 provision.go:143] copyHostCerts
	I0110 02:26:47.306186  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:26:47.306200  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:26:47.306285  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:26:47.306444  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:26:47.306459  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:26:47.306503  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:26:47.306586  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:26:47.306597  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:26:47.306634  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:26:47.306711  338461 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843779 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-843779]
	I0110 02:26:47.449507  338461 provision.go:177] copyRemoteCerts
	I0110 02:26:47.449566  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:26:47.449610  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.470425  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:47.566450  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:26:47.585229  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:26:47.602746  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:26:47.620541  338461 provision.go:87] duration metric: took 335.183446ms to configureAuth
	I0110 02:26:47.620570  338461 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:26:47.620817  338461 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:47.620959  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.640508  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:47.640816  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:47.640845  338461 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:26:47.907810  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:26:47.907838  338461 machine.go:97] duration metric: took 4.102037206s to provisionDockerMachine
	I0110 02:26:47.907850  338461 client.go:176] duration metric: took 9.327615152s to LocalClient.Create
	I0110 02:26:47.907873  338461 start.go:167] duration metric: took 9.327668738s to libmachine.API.Create "newest-cni-843779"
	I0110 02:26:47.907895  338461 start.go:293] postStartSetup for "newest-cni-843779" (driver="docker")
	I0110 02:26:47.907908  338461 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:26:47.907974  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:26:47.908018  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.928412  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.024000  338461 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:26:48.027481  338461 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:26:48.027509  338461 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:26:48.027520  338461 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:26:48.027567  338461 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:26:48.027683  338461 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:26:48.027841  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:26:48.035276  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:48.055326  338461 start.go:296] duration metric: took 147.417971ms for postStartSetup
	I0110 02:26:48.055713  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:48.075567  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:48.075921  338461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:26:48.075971  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.097098  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.195147  338461 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:26:48.201200  338461 start.go:128] duration metric: took 9.622613291s to createHost
	I0110 02:26:48.201223  338461 start.go:83] releasing machines lock for "newest-cni-843779", held for 9.622720302s
	I0110 02:26:48.201284  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:48.220675  338461 ssh_runner.go:195] Run: cat /version.json
	I0110 02:26:48.220716  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.220775  338461 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:26:48.220842  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.243579  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.243844  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.405575  338461 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:48.411977  338461 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:26:48.446783  338461 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:26:48.451861  338461 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:26:48.451946  338461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:26:48.478187  338461 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0110 02:26:48.478210  338461 start.go:496] detecting cgroup driver to use...
	I0110 02:26:48.478243  338461 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:26:48.478288  338461 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:26:48.496294  338461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:26:48.508994  338461 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:26:48.509050  338461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:26:48.526619  338461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:26:48.546200  338461 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:26:48.630754  338461 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:26:48.721548  338461 docker.go:234] disabling docker service ...
	I0110 02:26:48.721596  338461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:26:48.741103  338461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:26:48.754750  338461 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:26:48.849106  338461 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:26:48.926371  338461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:26:48.938571  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:26:48.953463  338461 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:26:48.953530  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.967831  338461 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:26:48.967929  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.981096  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.994270  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.003708  338461 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:26:49.012357  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.021802  338461 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.034747  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.043418  338461 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:26:49.050386  338461 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:26:49.057269  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:49.130961  338461 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:26:49.285916  338461 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:26:49.285981  338461 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:26:49.289691  338461 start.go:574] Will wait 60s for crictl version
	I0110 02:26:49.289750  338461 ssh_runner.go:195] Run: which crictl
	I0110 02:26:49.293070  338461 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:26:49.316456  338461 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:26:49.316525  338461 ssh_runner.go:195] Run: crio --version
	I0110 02:26:49.343597  338461 ssh_runner.go:195] Run: crio --version
	I0110 02:26:49.371114  338461 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:26:49.372159  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:49.389573  338461 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:26:49.393453  338461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:49.404679  338461 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:26:49.405677  338461 kubeadm.go:884] updating cluster {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:26:49.405793  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:49.405837  338461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:49.440734  338461 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:49.440758  338461 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:26:49.440812  338461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:49.469164  338461 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:49.469186  338461 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:26:49.469194  338461 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:26:49.469275  338461 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:26:49.469338  338461 ssh_runner.go:195] Run: crio config
	I0110 02:26:49.516476  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:49.516496  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:49.516510  338461 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:26:49.516530  338461 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843779 NodeName:newest-cni-843779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:26:49.516639  338461 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:26:49.516699  338461 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:26:49.524516  338461 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:26:49.524573  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:26:49.532047  338461 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:26:49.543799  338461 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:26:49.557580  338461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
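The rendered kubeadm config (the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents shown above) is staged as /var/tmp/minikube/kubeadm.yaml.new before being promoted for init. A minimal sketch of exercising such a multi-document config without side effects, assuming the kubeadm binary location used elsewhere in this log (--dry-run is a standard kubeadm init flag):

    # Dry-run the staged config on the node; nothing is persisted.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run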
	I0110 02:26:49.569161  338461 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:26:49.572423  338461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:49.581744  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:49.662065  338461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:49.689947  338461 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779 for IP: 192.168.85.2
	I0110 02:26:49.689968  338461 certs.go:195] generating shared ca certs ...
	I0110 02:26:49.689987  338461 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.690118  338461 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:26:49.690155  338461 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:26:49.690165  338461 certs.go:257] generating profile certs ...
	I0110 02:26:49.690213  338461 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key
	I0110 02:26:49.690230  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt with IP's: []
	I0110 02:26:49.756357  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt ...
	I0110 02:26:49.756381  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt: {Name:mk133e41b9f631c1d31398329e120a6d2e8c733e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.756536  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key ...
	I0110 02:26:49.756548  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key: {Name:mk1a6751a5bfd0db1a5029ef4003e6943a863573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.756626  338461 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5
	I0110 02:26:49.756641  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:26:49.820417  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 ...
	I0110 02:26:49.820450  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5: {Name:mk37a665bf86ff3fb7ea7a72608ed18515127576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.820601  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5 ...
	I0110 02:26:49.820613  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5: {Name:mkcfe121d9bb2cde5a393290decd7e10f53e5ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.820724  338461 certs.go:382] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt
	I0110 02:26:49.820839  338461 certs.go:386] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key
	I0110 02:26:49.820918  338461 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key
	I0110 02:26:49.820934  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt with IP's: []
	I0110 02:26:49.878096  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt ...
	I0110 02:26:49.878116  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt: {Name:mk7fbd29eafac26d0fd2ce98341bca7262aa29d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.878239  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key ...
	I0110 02:26:49.878251  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key: {Name:mk6eed83d5bd4bb5410d906db9b88c82acb84bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
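The profile certificates generated in this block (client, apiserver, aggregator proxy-client) are all signed by the shared minikubeCA; the apiserver cert's IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2) come from the service CIDR plus the node IP. One way to confirm the SANs that ended up in the generated cert (generic openssl invocation; the path is the one from this log):

    # Inspect the SANs baked into the freshly generated apiserver certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt \
      | grep -A1 'Subject Alternative Name'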
	I0110 02:26:49.878412  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:26:49.878451  338461 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:26:49.878461  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:26:49.878484  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:26:49.878507  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:26:49.878530  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:26:49.878568  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:49.879163  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:26:49.896970  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:26:49.913364  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:26:49.930220  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:26:49.947718  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:26:49.965127  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:26:49.984031  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:26:50.002271  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:26:50.020474  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:26:50.039132  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:26:50.055600  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:26:50.075592  338461 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:26:50.090453  338461 ssh_runner.go:195] Run: openssl version
	I0110 02:26:50.096522  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.103714  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:26:50.111443  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.115011  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.115064  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.153459  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:26:50.160763  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14086.pem /etc/ssl/certs/51391683.0
	I0110 02:26:50.168017  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.174985  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:26:50.182015  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.185610  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.185650  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.221124  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:50.228348  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/140862.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:50.235190  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.242739  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:26:50.249485  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.252727  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.252768  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.288639  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:26:50.296037  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
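Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above), which is how OpenSSL-based clients locate trusted CAs. The hash-and-link step, condensed from the commands in the log:

    # Link a CA certificate under its subject hash so OpenSSL can find it.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"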
	I0110 02:26:50.303287  338461 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:26:50.306535  338461 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:26:50.306586  338461 kubeadm.go:401] StartCluster: {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:50.306674  338461 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:26:50.306717  338461 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:26:50.333629  338461 cri.go:96] found id: ""
	I0110 02:26:50.333685  338461 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:26:50.341547  338461 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:26:50.349445  338461 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:26:50.349507  338461 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:26:50.360715  338461 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:26:50.360732  338461 kubeadm.go:158] found existing configuration files:
	
	I0110 02:26:50.360788  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:26:50.386174  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:26:50.386241  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:26:50.395383  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:26:50.410044  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:26:50.410107  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:26:50.418596  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:26:50.428393  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:26:50.428444  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:26:50.436482  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:26:50.444188  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:26:50.444240  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:26:50.452657  338461 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:26:50.494415  338461 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:26:50.494503  338461 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:26:50.563724  338461 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:26:50.563829  338461 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 02:26:50.563876  338461 kubeadm.go:319] OS: Linux
	I0110 02:26:50.563960  338461 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:26:50.564045  338461 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:26:50.564142  338461 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:26:50.564234  338461 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:26:50.564307  338461 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:26:50.564383  338461 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:26:50.564454  338461 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:26:50.564517  338461 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 02:26:50.622544  338461 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:26:50.622713  338461 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:26:50.622878  338461 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:26:50.630207  338461 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0110 02:26:48.986976  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:49.484087  333054 pod_ready.go:94] pod "coredns-7d764666f9-rhgg5" is "Ready"
	I0110 02:26:49.484114  333054 pod_ready.go:86] duration metric: took 31.505548695s for pod "coredns-7d764666f9-rhgg5" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.486734  333054 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.490527  333054 pod_ready.go:94] pod "etcd-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.490552  333054 pod_ready.go:86] duration metric: took 3.797789ms for pod "etcd-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.492326  333054 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.495598  333054 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.495619  333054 pod_ready.go:86] duration metric: took 3.274816ms for pod "kube-apiserver-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.499483  333054 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.682853  333054 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.682877  333054 pod_ready.go:86] duration metric: took 183.376493ms for pod "kube-controller-manager-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.882938  333054 pod_ready.go:83] waiting for pod "kube-proxy-6dcdf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.283112  333054 pod_ready.go:94] pod "kube-proxy-6dcdf" is "Ready"
	I0110 02:26:50.283137  333054 pod_ready.go:86] duration metric: took 400.175094ms for pod "kube-proxy-6dcdf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.483261  333054 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.883249  333054 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:50.883281  333054 pod_ready.go:86] duration metric: took 399.994421ms for pod "kube-scheduler-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.883295  333054 pod_ready.go:40] duration metric: took 32.90814338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:50.927315  333054 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:50.939799  333054 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-313784" cluster and "default" namespace by default
	I0110 02:26:50.638660  338461 out.go:252]   - Generating certificates and keys ...
	I0110 02:26:50.638761  338461 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:26:50.638847  338461 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:26:50.748673  338461 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:26:50.808044  338461 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:26:50.825988  338461 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:26:51.053436  338461 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:26:51.189722  338461 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:26:51.189935  338461 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-843779] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:26:51.265783  338461 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:26:51.266038  338461 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-843779] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:26:51.453075  338461 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:26:51.463944  338461 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:26:51.492453  338461 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:26:51.492568  338461 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:26:51.639156  338461 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:26:51.713669  338461 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:26:51.737323  338461 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:26:51.771362  338461 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:26:51.803764  338461 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:26:51.804316  338461 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:26:51.807706  338461 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:26:51.809141  338461 out.go:252]   - Booting up control plane ...
	I0110 02:26:51.809224  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:26:51.809305  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:26:51.810160  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:26:51.836089  338461 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:26:51.836223  338461 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:26:51.843051  338461 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:26:51.843358  338461 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:26:51.843421  338461 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:26:51.940122  338461 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:26:51.940253  338461 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:26:52.440845  338461 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.777212ms
	I0110 02:26:52.443950  338461 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:26:52.444086  338461 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0110 02:26:52.444167  338461 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:26:52.444244  338461 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:26:53.449008  338461 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004897675s
	I0110 02:26:54.312692  338461 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.868609736s
	I0110 02:26:55.946035  338461 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501977175s
	I0110 02:26:55.963146  338461 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:26:55.973142  338461 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:26:55.980674  338461 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:26:55.980865  338461 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-843779 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:26:55.987964  338461 kubeadm.go:319] [bootstrap-token] Using token: 1ffugu.jcse9fz4pyvkzq7m
	I0110 02:26:55.989224  338461 out.go:252]   - Configuring RBAC rules ...
	I0110 02:26:55.989358  338461 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:26:55.992014  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:26:55.996727  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:26:55.998941  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:26:56.001013  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:26:56.003983  338461 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:26:56.351224  338461 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:26:56.766422  338461 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:26:57.351738  338461 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:26:57.352701  338461 kubeadm.go:319] 
	I0110 02:26:57.352806  338461 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:26:57.352825  338461 kubeadm.go:319] 
	I0110 02:26:57.352959  338461 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:26:57.352968  338461 kubeadm.go:319] 
	I0110 02:26:57.352998  338461 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:26:57.353086  338461 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:26:57.353183  338461 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:26:57.353201  338461 kubeadm.go:319] 
	I0110 02:26:57.353278  338461 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:26:57.353288  338461 kubeadm.go:319] 
	I0110 02:26:57.353328  338461 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:26:57.353333  338461 kubeadm.go:319] 
	I0110 02:26:57.353390  338461 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:26:57.353464  338461 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:26:57.353532  338461 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:26:57.353544  338461 kubeadm.go:319] 
	I0110 02:26:57.353648  338461 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:26:57.353744  338461 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:26:57.353752  338461 kubeadm.go:319] 
	I0110 02:26:57.353880  338461 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1ffugu.jcse9fz4pyvkzq7m \
	I0110 02:26:57.354051  338461 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 \
	I0110 02:26:57.354081  338461 kubeadm.go:319] 	--control-plane 
	I0110 02:26:57.354090  338461 kubeadm.go:319] 
	I0110 02:26:57.354183  338461 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:26:57.354189  338461 kubeadm.go:319] 
	I0110 02:26:57.354317  338461 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1ffugu.jcse9fz4pyvkzq7m \
	I0110 02:26:57.354475  338461 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 
	I0110 02:26:57.356934  338461 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 02:26:57.357052  338461 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
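kubeadm init finishes in roughly seven seconds here and prints the usual join instructions; the token and discovery hash above are what an additional node would present to this control plane. The worker-join form, as emitted by the init run above (the bootstrap token is short-lived, ttl 24h per the InitConfiguration):

    # Worker join command exactly as printed by kubeadm init in this run.
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token 1ffugu.jcse9fz4pyvkzq7m \
      --discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4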
	I0110 02:26:57.357076  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:57.357083  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:57.358426  338461 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:26:57.359425  338461 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:26:57.363469  338461 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:26:57.363485  338461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:26:57.376178  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:26:57.578386  338461 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:26:57.578562  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:57.578594  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-843779 minikube.k8s.io/updated_at=2026_01_10T02_26_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=newest-cni-843779 minikube.k8s.io/primary=true
	I0110 02:26:57.588227  338461 ops.go:34] apiserver oom_adj: -16
	I0110 02:26:57.657708  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:58.158330  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:58.658496  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:59.158389  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:59.657973  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:00.158625  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:00.657790  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:01.158769  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:01.658356  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:02.157852  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:02.222922  338461 kubeadm.go:1114] duration metric: took 4.644421285s to wait for elevateKubeSystemPrivileges
	I0110 02:27:02.222958  338461 kubeadm.go:403] duration metric: took 11.916375506s to StartCluster
	I0110 02:27:02.222979  338461 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:02.223054  338461 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:02.224315  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:02.224602  338461 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:27:02.224625  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:27:02.224720  338461 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:27:02.224804  338461 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-843779"
	I0110 02:27:02.224819  338461 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-843779"
	I0110 02:27:02.224830  338461 addons.go:70] Setting default-storageclass=true in profile "newest-cni-843779"
	I0110 02:27:02.224854  338461 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:02.224854  338461 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843779"
	I0110 02:27:02.224821  338461 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:02.225428  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.225523  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.226702  338461 out.go:179] * Verifying Kubernetes components...
	I0110 02:27:02.227842  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:02.250197  338461 addons.go:239] Setting addon default-storageclass=true in "newest-cni-843779"
	I0110 02:27:02.250249  338461 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:02.250829  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.251301  338461 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:27:02.252449  338461 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:02.252463  338461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:27:02.252502  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:02.281535  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:02.282356  338461 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:02.282376  338461 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:27:02.282444  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:02.310280  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:02.321999  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:27:02.376661  338461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:02.399734  338461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:02.431196  338461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:02.524677  338461 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
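The long sed pipeline at 02:27:02.321999 rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors, then feeds the result back through kubectl replace. A sketch of how to inspect the result, with the expected fragment reconstructed from those sed expressions rather than copied from actual output:

    # Inspect the rewritten Corefile on this cluster.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected fragment after the edit (reconstructed, not verbatim):
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.85.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf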
	I0110 02:27:02.525738  338461 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:27:02.525801  338461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:27:02.735965  338461 api_server.go:72] duration metric: took 511.326051ms to wait for apiserver process to appear ...
	I0110 02:27:02.735993  338461 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:27:02.736010  338461 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:02.742014  338461 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:27:02.743062  338461 api_server.go:141] control plane version: v1.35.0
	I0110 02:27:02.743090  338461 api_server.go:131] duration metric: took 7.089818ms to wait for apiserver health ...
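The readiness gate here is a plain HTTPS GET against the apiserver's /healthz endpoint on the node IP; anything other than a 200 keeps the wait loop going. A manual equivalent, run from the node (or anywhere with a route to 192.168.85.2) and assuming the default anonymous access to health endpoints plus the cluster CA file on the node:

    # Probe the same endpoint the log checks; expected body on success is "ok".
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.85.2:8443/healthz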
	I0110 02:27:02.743103  338461 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:27:02.745969  338461 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:27:02.746876  338461 system_pods.go:59] 8 kube-system pods found
	I0110 02:27:02.746927  338461 system_pods.go:61] "coredns-7d764666f9-zmtqf" [bab0ce6c-6845-4a76-aba8-25902122e535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:02.746940  338461 system_pods.go:61] "etcd-newest-cni-843779" [fdd4d85a-8248-4455-82c1-256311f58e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:27:02.746952  338461 system_pods.go:61] "kindnet-p5kwz" [a4006850-95c0-4567-9f85-7914adcf599d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:27:02.746962  338461 system_pods.go:61] "kube-apiserver-newest-cni-843779" [6c2775ff-47fa-4806-9434-1cf525435963] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:27:02.746972  338461 system_pods.go:61] "kube-controller-manager-newest-cni-843779" [3d61a2c1-6564-4d15-9c8b-1eaefd4c6878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:27:02.746986  338461 system_pods.go:61] "kube-proxy-9djhz" [d97a2a8a-cfa4-414f-ad6d-47af95479498] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:27:02.747020  338461 system_pods.go:61] "kube-scheduler-newest-cni-843779" [b6848e97-9fd5-4a56-b28d-0f581cc698b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:27:02.747038  338461 system_pods.go:61] "storage-provisioner" [4f1dd65f-c7de-48ab-8d72-fcc925bbd6be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:02.747051  338461 system_pods.go:74] duration metric: took 3.939591ms to wait for pod list to return data ...
	I0110 02:27:02.747065  338461 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:27:02.747347  338461 addons.go:530] duration metric: took 522.62294ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:27:02.749774  338461 default_sa.go:45] found service account: "default"
	I0110 02:27:02.749806  338461 default_sa.go:55] duration metric: took 2.7205ms for default service account to be created ...
	I0110 02:27:02.749819  338461 kubeadm.go:587] duration metric: took 525.183285ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:02.749838  338461 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:27:02.752066  338461 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:27:02.752090  338461 node_conditions.go:123] node cpu capacity is 8
	I0110 02:27:02.752105  338461 node_conditions.go:105] duration metric: took 2.261829ms to run NodePressure ...
	I0110 02:27:02.752117  338461 start.go:242] waiting for startup goroutines ...
	I0110 02:27:03.028702  338461 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-843779" context rescaled to 1 replicas
	I0110 02:27:03.028738  338461 start.go:247] waiting for cluster config update ...
	I0110 02:27:03.028749  338461 start.go:256] writing updated cluster config ...
	I0110 02:27:03.029031  338461 ssh_runner.go:195] Run: rm -f paused
	I0110 02:27:03.083609  338461 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:27:03.085383  338461 out.go:179] * Done! kubectl is now configured to use "newest-cni-843779" cluster and "default" namespace by default
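With the Done! line, the newest-cni-843779 context has been written to the integration kubeconfig and selected as current, so subsequent kubectl calls in the test reach this cluster. A quick usage check, assuming the context name matches the profile name as it does for the other profiles in this run:

    # Confirm the active context and list system pods on the new cluster.
    kubectl config current-context
    kubectl --context newest-cni-843779 get pods -A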
	
	
	==> CRI-O <==
	Jan 10 02:26:37 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:37.347003275Z" level=info msg="Started container" PID=1804 containerID=9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper id=465914cb-7da5-4b4b-9182-4d19a99907c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41c3f2b68ffd6a9d5178b12b198f8d740b6707f9b5b2d6908989b81b6c84c5fb
	Jan 10 02:26:38 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:38.161778024Z" level=info msg="Removing container: 21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33" id=af87359e-a599-4eca-8425-c797ac9e6757 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:38 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:38.171160956Z" level=info msg="Removed container 21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=af87359e-a599-4eca-8425-c797ac9e6757 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.19125691Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=abb3732d-abc7-443d-b272-dff4a6299fb1 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.192637314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=64943090-48a8-4dd4-a0f0-0e248d6e426b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.19404382Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b5205294-34d5-4b6c-928c-a8e3a942d85c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.194187909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.199947048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.200164022Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5224d0880c9c301eec779f03d4d81c80f85bf1cc0ddbb1597d76c32f189d5d9d/merged/etc/passwd: no such file or directory"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.200310075Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5224d0880c9c301eec779f03d4d81c80f85bf1cc0ddbb1597d76c32f189d5d9d/merged/etc/group: no such file or directory"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.200872578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.232724648Z" level=info msg="Created container f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f: kube-system/storage-provisioner/storage-provisioner" id=b5205294-34d5-4b6c-928c-a8e3a942d85c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.233409017Z" level=info msg="Starting container: f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f" id=4fd40f79-62e9-4e70-8569-408c20cd07c5 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:26:48 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:26:48.235506225Z" level=info msg="Started container" PID=1818 containerID=f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f description=kube-system/storage-provisioner/storage-provisioner id=4fd40f79-62e9-4e70-8569-408c20cd07c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67820aefb2c369a2b0f2e00aa163bb93638de0ad900d5d5e4da178b1f4bd92be
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.076596171Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d3c1e8a9-06b2-4a75-a4c5-c639816c3bc8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.07981243Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=17415710-0156-4fe9-ae7c-2bbca051e2d8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.080878077Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=14f98366-8cb9-439a-9a2e-a608fa035eb5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.081059085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.087190206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.087742614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.115248556Z" level=info msg="Created container d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=14f98366-8cb9-439a-9a2e-a608fa035eb5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.115757281Z" level=info msg="Starting container: d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81" id=85febb7f-df25-4f95-946c-bf7b2c47f0c0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.117734195Z" level=info msg="Started container" PID=1855 containerID=d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper id=85febb7f-df25-4f95-946c-bf7b2c47f0c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41c3f2b68ffd6a9d5178b12b198f8d740b6707f9b5b2d6908989b81b6c84c5fb
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.2231911Z" level=info msg="Removing container: 9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7" id=fbeae63c-6884-4e33-b26e-401f2e8aa14f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:27:00 default-k8s-diff-port-313784 crio[577]: time="2026-01-10T02:27:00.231827449Z" level=info msg="Removed container 9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2/dashboard-metrics-scraper" id=fbeae63c-6884-4e33-b26e-401f2e8aa14f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d9b4206d7f0ac       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   41c3f2b68ffd6       dashboard-metrics-scraper-867fb5f87b-shxh2             kubernetes-dashboard
	f9c6c31df7faa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   67820aefb2c36       storage-provisioner                                    kube-system
	2f752a6224d46       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   e68152f7c1559       kubernetes-dashboard-b84665fb8-cvzmq                   kubernetes-dashboard
	e9c87ab85de9c       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   840a1753e8c35       coredns-7d764666f9-rhgg5                               kube-system
	8f9e78594e45a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   352555fb853f6       busybox                                                default
	85a1be9712215       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   fbc02dec624c5       kindnet-wbscw                                          kube-system
	998034535f5da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   67820aefb2c36       storage-provisioner                                    kube-system
	5a6b196ace135       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           50 seconds ago      Running             kube-proxy                  0                   31e5ac899a7f5       kube-proxy-6dcdf                                       kube-system
	35cfd8caca1ff       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           53 seconds ago      Running             kube-scheduler              0                   8acf12c49406c       kube-scheduler-default-k8s-diff-port-313784            kube-system
	fc29eda71f4bd       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           53 seconds ago      Running             kube-controller-manager     0                   9d111be729ca8       kube-controller-manager-default-k8s-diff-port-313784   kube-system
	b5de7f05c48c0       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           53 seconds ago      Running             etcd                        0                   70726124463f1       etcd-default-k8s-diff-port-313784                      kube-system
	6f7b3a029a3bc       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           53 seconds ago      Running             kube-apiserver              0                   e1893257de94c       kube-apiserver-default-k8s-diff-port-313784            kube-system
	
	
	==> coredns [e9c87ab85de9c59e7f2a0e811771f9c88502d8f5dbd60ccfb4eecf174cee932f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48099 - 42204 "HINFO IN 3807732134652601533.6838935619091275238. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071820786s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-313784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-313784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=default-k8s-diff-port-313784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-313784
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:26:47 +0000   Sat, 10 Jan 2026 02:25:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-313784
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                eaef45ee-c0a4-4074-89b2-25c5e6ae4f6a
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-rhgg5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-313784                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-wbscw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-313784             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-313784    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-6dcdf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-313784             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-shxh2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cvzmq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  105s  node-controller  Node default-k8s-diff-port-313784 event: Registered Node default-k8s-diff-port-313784 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node default-k8s-diff-port-313784 event: Registered Node default-k8s-diff-port-313784 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [b5de7f05c48c095e9fef4efb74abefe8eb07be5b286dca9f1e02db1c8c79c371] <==
	{"level":"info","ts":"2026-01-10T02:26:14.639275Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:26:14.639348Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T02:26:14.639422Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T02:26:14.639501Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2026-01-10T02:26:14.638808Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"dfc97eb0aae75b33","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-10T02:26:14.639596Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:26:14.641037Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:26:15.529932Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:15.529977Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:15.530018Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:15.530028Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:26:15.530042Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.530720Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.530738Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:26:15.530753Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.530759Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T02:26:15.531396Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:26:15.531429Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:26:15.531394Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:default-k8s-diff-port-313784 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:26:15.531706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:26:15.531730Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:26:15.533213Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:26:15.533505Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:26:15.535746Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:26:15.535844Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 02:27:08 up  1:09,  0 user,  load average: 2.69, 3.31, 2.35
	Linux default-k8s-diff-port-313784 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85a1be97122158afba1c7dc996f622d1063cec041014b0cc8bbe1c378ba119d4] <==
	I0110 02:26:17.701734       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:26:17.701976       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 02:26:17.702159       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:26:17.702184       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:26:17.702201       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:26:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:26:17.901530       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:26:17.901593       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:26:17.901606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:26:18.002170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:26:18.402380       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:26:18.402409       1 metrics.go:72] Registering metrics
	I0110 02:26:18.402476       1 controller.go:711] "Syncing nftables rules"
	I0110 02:26:27.901658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:27.901721       1 main.go:301] handling current node
	I0110 02:26:37.902141       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:37.902201       1 main.go:301] handling current node
	I0110 02:26:47.902198       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:47.902251       1 main.go:301] handling current node
	I0110 02:26:57.902322       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:26:57.902364       1 main.go:301] handling current node
	I0110 02:27:07.902236       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 02:27:07.902285       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f7b3a029a3bc4ba4e3633368af6270be9e6945d669d649d76e7070308610a5d] <==
	I0110 02:26:16.477203       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:26:16.477214       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:26:16.477104       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 02:26:16.477299       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:26:16.477311       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:26:16.477347       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:26:16.477372       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:26:16.477640       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:26:16.477668       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:26:16.479961       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:16.484222       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0110 02:26:16.485147       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:26:16.494350       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 02:26:16.512793       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:26:16.718432       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:26:16.744818       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:26:16.760135       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:26:16.767615       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:26:16.774265       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:26:16.804677       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.80.27"}
	I0110 02:26:16.814275       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.205.23"}
	I0110 02:26:17.382463       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:26:20.074595       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:26:20.273381       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:26:20.324051       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fc29eda71f4bde30696f3da25f43c0e08c5a51d939a947924ad7303cd468a80f] <==
	I0110 02:26:19.624656       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624657       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624667       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624790       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.624671       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625577       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625617       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625684       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625802       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625832       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625928       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625938       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625960       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625986       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.625803       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.626369       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.626369       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.627392       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.635710       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:26:19.724140       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:19.724157       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:26:19.724163       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:26:19.736293       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:20.277385       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0110 02:26:20.277478       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [5a6b196ace1351b1c9640bb3a22624c7d34f7250b1aa608bdb4d91bcb09f31b4] <==
	I0110 02:26:17.473849       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:26:17.556029       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:26:17.657062       1 shared_informer.go:377] "Caches are synced"
	I0110 02:26:17.657105       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0110 02:26:17.657212       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:26:17.677776       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:26:17.677822       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:26:17.683630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:26:17.684091       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:26:17.684128       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:26:17.685628       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:26:17.685649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:26:17.685697       1 config.go:200] "Starting service config controller"
	I0110 02:26:17.685713       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:26:17.685719       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:26:17.685736       1 config.go:309] "Starting node config controller"
	I0110 02:26:17.685744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:26:17.685750       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:26:17.685719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:26:17.786067       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:26:17.786171       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:26:17.786168       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [35cfd8caca1ffb3ed069875a6f4df02737c571e205d4cb57ddce696a7018cd87] <==
	I0110 02:26:15.227400       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:26:16.413904       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:26:16.413936       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:26:16.413949       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:26:16.413958       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:26:16.446494       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:26:16.446596       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:26:16.449388       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:26:16.449419       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:26:16.449511       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:26:16.449556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:26:16.550973       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:29 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:29.137975     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:26:32 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:32.463318     743 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-313784" containerName="kube-apiserver"
	Jan 10 02:26:33 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:33.147169     743 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-313784" containerName="kube-apiserver"
	Jan 10 02:26:37 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:37.307779     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:37 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:37.307827     743 scope.go:122] "RemoveContainer" containerID="21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:38.160479     743 scope.go:122] "RemoveContainer" containerID="21fbadce3c7070801e0912c88760563d7e998b439c38197de4b5b9a8bdf3ce33"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:38.160738     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:38.160776     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:26:38 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:38.161002     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:26:47 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:47.308134     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:26:47 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:47.308179     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:26:47 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:47.308397     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:26:48 default-k8s-diff-port-313784 kubelet[743]: I0110 02:26:48.190740     743 scope.go:122] "RemoveContainer" containerID="998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad"
	Jan 10 02:26:49 default-k8s-diff-port-313784 kubelet[743]: E0110 02:26:49.418819     743 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rhgg5" containerName="coredns"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: E0110 02:27:00.075996     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:00.076056     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:00.221857     743 scope.go:122] "RemoveContainer" containerID="9e7326edb4b7b9386be50f130d4edd0ccd8b3dc67fa2e361a8bac426da3d26a7"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: E0110 02:27:00.222096     743 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" containerName="dashboard-metrics-scraper"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:00.222135     743 scope.go:122] "RemoveContainer" containerID="d9b4206d7f0ac2ce9f64b74410caa8a395bc0806ec26a1bd3692b1fb67ee1b81"
	Jan 10 02:27:00 default-k8s-diff-port-313784 kubelet[743]: E0110 02:27:00.222356     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-shxh2_kubernetes-dashboard(63345db1-2d3f-4c44-9a14-c2bade7afb21)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-shxh2" podUID="63345db1-2d3f-4c44-9a14-c2bade7afb21"
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:27:03 default-k8s-diff-port-313784 kubelet[743]: I0110 02:27:03.125752     743 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:27:03 default-k8s-diff-port-313784 systemd[1]: kubelet.service: Consumed 1.601s CPU time.
	
	
	==> kubernetes-dashboard [2f752a6224d46906bbadd0c1a12d9b82fc4244b9f4c554a86ec2fefa82fb86f8] <==
	2026/01/10 02:26:24 Using namespace: kubernetes-dashboard
	2026/01/10 02:26:24 Using in-cluster config to connect to apiserver
	2026/01/10 02:26:24 Using secret token for csrf signing
	2026/01/10 02:26:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:26:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:26:24 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:26:24 Generating JWE encryption key
	2026/01/10 02:26:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:26:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:26:24 Initializing JWE encryption key from synchronized object
	2026/01/10 02:26:24 Creating in-cluster Sidecar client
	2026/01/10 02:26:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:26:24 Serving insecurely on HTTP port: 9090
	2026/01/10 02:26:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:26:24 Starting overwatch
	
	
	==> storage-provisioner [998034535f5da2818ee887132648e0f2c4ce8e2dd9984530238973083e214dad] <==
	I0110 02:26:17.442180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:26:47.446289       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f9c6c31df7faa226393bd7a5fd37124095965b07d5980dfde148ce171edf798f] <==
	I0110 02:26:48.249643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:26:48.257248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:26:48.257291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:26:48.260048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:51.714814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:55.975567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:26:59.573991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:02.628333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:05.650841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:05.655834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:27:05.656031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:27:05.656238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313784_49a694cf-c504-4e2a-9dd4-6ff4e77429db!
	I0110 02:27:05.656284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"156faf5c-a028-4a8b-8a5c-5f90c9b1d50d", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-313784_49a694cf-c504-4e2a-9dd4-6ff4e77429db became leader
	W0110 02:27:05.659205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:05.665012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:27:05.756459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313784_49a694cf-c504-4e2a-9dd4-6ff4e77429db!
	W0110 02:27:07.668713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:27:07.672679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784: exit status 2 (331.535733ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.348341ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-843779
helpers_test.go:244: (dbg) docker inspect newest-cni-843779:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2",
	        "Created": "2026-01-10T02:26:43.222970574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 340341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:26:43.273000059Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/hosts",
	        "LogPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2-json.log",
	        "Name": "/newest-cni-843779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-843779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-843779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2",
	                "LowerDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-843779",
	                "Source": "/var/lib/docker/volumes/newest-cni-843779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-843779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-843779",
	                "name.minikube.sigs.k8s.io": "newest-cni-843779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e73515892c321914448bdf361ec912ec633810d3321fa48ef76071d330935ffc",
	            "SandboxKey": "/var/run/docker/netns/e73515892c32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-843779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be608ab0d563e9be569420f07b60320d59caab1b6e3a4268ebcdc8a31d692309",
	                    "EndpointID": "24dadbfd196415f6c11c4c8c589ff6094dfad14a89072c539500570f34696673",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "36:dc:1e:b4:c1:de",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-843779",
	                        "d1a10faa6dbc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843779 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:25 UTC │
	│ start   │ -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-313784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-313784 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:25 UTC │ 10 Jan 26 02:26 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ no-preload-190877 image list --format=json                                                                                                                                                                                                    │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p no-preload-190877 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ default-k8s-diff-port-313784 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p default-k8s-diff-port-313784 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:26:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:26:38.395701  338461 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:38.395954  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.395962  338461 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:38.395966  338461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:38.396156  338461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:26:38.396626  338461 out.go:368] Setting JSON to false
	I0110 02:26:38.397992  338461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4147,"bootTime":1768007851,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:26:38.398046  338461 start.go:143] virtualization: kvm guest
	I0110 02:26:38.399795  338461 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:26:38.400823  338461 notify.go:221] Checking for updates...
	I0110 02:26:38.400839  338461 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:26:38.401952  338461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:26:38.403142  338461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:26:38.404397  338461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:26:38.405512  338461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:26:38.406412  338461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:26:38.407953  338461 config.go:182] Loaded profile config "default-k8s-diff-port-313784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408047  338461 config.go:182] Loaded profile config "embed-certs-872415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408138  338461 config.go:182] Loaded profile config "no-preload-190877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:38.408217  338461 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:26:38.434056  338461 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:26:38.434192  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.492093  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.480726897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.492192  338461 docker.go:319] overlay module found
	I0110 02:26:38.493713  338461 out.go:179] * Using the docker driver based on user configuration
	I0110 02:26:38.494702  338461 start.go:309] selected driver: docker
	I0110 02:26:38.494716  338461 start.go:928] validating driver "docker" against <nil>
	I0110 02:26:38.494729  338461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:26:38.495359  338461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:26:38.549669  338461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:26:38.540019441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:26:38.549849  338461 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:26:38.549882  338461 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:26:38.550158  338461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:26:38.552024  338461 out.go:179] * Using Docker driver with root privileges
	I0110 02:26:38.553057  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:38.553113  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:38.553122  338461 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:26:38.553168  338461 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:38.554252  338461 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:26:38.555155  338461 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:26:38.556242  338461 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:26:38.557247  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:38.557276  338461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:26:38.557288  338461 cache.go:65] Caching tarball of preloaded images
	I0110 02:26:38.557342  338461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:26:38.557382  338461 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:26:38.557395  338461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:26:38.557518  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:38.557546  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json: {Name:mk980e5e7d4c45bf0d1bdc96021cfe1dfa9563b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:38.578353  338461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:26:38.578368  338461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:26:38.578383  338461 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:26:38.578406  338461 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:26:38.578491  338461 start.go:364] duration metric: took 71.777µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:26:38.578513  338461 start.go:93] Provisioning new machine with config: &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:26:38.578574  338461 start.go:125] createHost starting for "" (driver="docker")
	W0110 02:26:37.984376  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:40.485189  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:38.579999  338461 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:26:38.580204  338461 start.go:159] libmachine.API.Create for "newest-cni-843779" (driver="docker")
	I0110 02:26:38.580227  338461 client.go:173] LocalClient.Create starting
	I0110 02:26:38.580292  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem
	I0110 02:26:38.580322  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580343  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580394  338461 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem
	I0110 02:26:38.580414  338461 main.go:144] libmachine: Decoding PEM data...
	I0110 02:26:38.580432  338461 main.go:144] libmachine: Parsing certificate...
	I0110 02:26:38.580717  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:26:38.596966  338461 cli_runner.go:211] docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:26:38.597028  338461 network_create.go:284] running [docker network inspect newest-cni-843779] to gather additional debugging logs...
	I0110 02:26:38.597049  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779
	W0110 02:26:38.613182  338461 cli_runner.go:211] docker network inspect newest-cni-843779 returned with exit code 1
	I0110 02:26:38.613209  338461 network_create.go:287] error running [docker network inspect newest-cni-843779]: docker network inspect newest-cni-843779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-843779 not found
	I0110 02:26:38.613225  338461 network_create.go:289] output of [docker network inspect newest-cni-843779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-843779 not found
	
	** /stderr **
	I0110 02:26:38.613341  338461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:38.630396  338461 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
	I0110 02:26:38.631029  338461 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6b93c57cdce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:4c:65:68:38:06} reservation:<nil>}
	I0110 02:26:38.631780  338461 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c494a40b219 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:38:5d:78:96:da} reservation:<nil>}
	I0110 02:26:38.632287  338461 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a77220e3dd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:16:c1:44:08:5d} reservation:<nil>}
	I0110 02:26:38.633099  338461 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea9360}
	I0110 02:26:38.633118  338461 network_create.go:124] attempt to create docker network newest-cni-843779 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:26:38.633156  338461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-843779 newest-cni-843779
	I0110 02:26:38.681030  338461 network_create.go:108] docker network newest-cni-843779 192.168.85.0/24 created
	I0110 02:26:38.681058  338461 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-843779" container
	I0110 02:26:38.681110  338461 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:26:38.698815  338461 cli_runner.go:164] Run: docker volume create newest-cni-843779 --label name.minikube.sigs.k8s.io=newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:26:38.715947  338461 oci.go:103] Successfully created a docker volume newest-cni-843779
	I0110 02:26:38.716014  338461 cli_runner.go:164] Run: docker run --rm --name newest-cni-843779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --entrypoint /usr/bin/test -v newest-cni-843779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:26:39.139879  338461 oci.go:107] Successfully prepared a docker volume newest-cni-843779
	I0110 02:26:39.139985  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:39.140001  338461 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:26:39.140074  338461 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:26:43.148608  338461 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-843779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.00849465s)
	I0110 02:26:43.148642  338461 kic.go:203] duration metric: took 4.008637849s to extract preloaded images to volume ...
	W0110 02:26:43.148739  338461 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 02:26:43.148767  338461 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 02:26:43.148804  338461 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:26:43.204668  338461 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-843779 --name newest-cni-843779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-843779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-843779 --network newest-cni-843779 --ip 192.168.85.2 --volume newest-cni-843779:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	W0110 02:26:42.983710  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:44.983765  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	W0110 02:26:46.984713  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:43.527936  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Running}}
	I0110 02:26:43.548293  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.567102  338461 cli_runner.go:164] Run: docker exec newest-cni-843779 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:26:43.613558  338461 oci.go:144] the created container "newest-cni-843779" has a running status.
	I0110 02:26:43.613590  338461 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa...
	I0110 02:26:43.684437  338461 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:26:43.713852  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.736219  338461 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:26:43.736257  338461 kic_runner.go:114] Args: [docker exec --privileged newest-cni-843779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:26:43.785594  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:26:43.805775  338461 machine.go:94] provisionDockerMachine start ...
	I0110 02:26:43.805896  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:43.831840  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:43.832223  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:43.832251  338461 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:26:43.833032  338461 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35784->127.0.0.1:33130: read: connection reset by peer
	I0110 02:26:46.969499  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:26:46.969526  338461 ubuntu.go:182] provisioning hostname "newest-cni-843779"
	I0110 02:26:46.969593  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:46.991696  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:46.992031  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:46.992054  338461 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-843779 && echo "newest-cni-843779" | sudo tee /etc/hostname
	I0110 02:26:47.136043  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:26:47.136128  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.157826  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:47.158110  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:47.158139  338461 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:26:47.285266  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:26:47.285296  338461 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:26:47.285326  338461 ubuntu.go:190] setting up certificates
	I0110 02:26:47.285339  338461 provision.go:84] configureAuth start
	I0110 02:26:47.285388  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:47.306123  338461 provision.go:143] copyHostCerts
	I0110 02:26:47.306186  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:26:47.306200  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:26:47.306285  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:26:47.306444  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:26:47.306459  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:26:47.306503  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:26:47.306586  338461 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:26:47.306597  338461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:26:47.306634  338461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:26:47.306711  338461 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843779 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-843779]
	I0110 02:26:47.449507  338461 provision.go:177] copyRemoteCerts
	I0110 02:26:47.449566  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:26:47.449610  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.470425  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:47.566450  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:26:47.585229  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:26:47.602746  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:26:47.620541  338461 provision.go:87] duration metric: took 335.183446ms to configureAuth
	I0110 02:26:47.620570  338461 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:26:47.620817  338461 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:26:47.620959  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.640508  338461 main.go:144] libmachine: Using SSH client type: native
	I0110 02:26:47.640816  338461 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I0110 02:26:47.640845  338461 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:26:47.907810  338461 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:26:47.907838  338461 machine.go:97] duration metric: took 4.102037206s to provisionDockerMachine
	I0110 02:26:47.907850  338461 client.go:176] duration metric: took 9.327615152s to LocalClient.Create
	I0110 02:26:47.907873  338461 start.go:167] duration metric: took 9.327668738s to libmachine.API.Create "newest-cni-843779"
	I0110 02:26:47.907895  338461 start.go:293] postStartSetup for "newest-cni-843779" (driver="docker")
	I0110 02:26:47.907908  338461 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:26:47.907974  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:26:47.908018  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:47.928412  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.024000  338461 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:26:48.027481  338461 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:26:48.027509  338461 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:26:48.027520  338461 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:26:48.027567  338461 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:26:48.027683  338461 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:26:48.027841  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:26:48.035276  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:48.055326  338461 start.go:296] duration metric: took 147.417971ms for postStartSetup
	I0110 02:26:48.055713  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:48.075567  338461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:26:48.075921  338461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:26:48.075971  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.097098  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.195147  338461 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:26:48.201200  338461 start.go:128] duration metric: took 9.622613291s to createHost
	I0110 02:26:48.201223  338461 start.go:83] releasing machines lock for "newest-cni-843779", held for 9.622720302s
	I0110 02:26:48.201284  338461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:26:48.220675  338461 ssh_runner.go:195] Run: cat /version.json
	I0110 02:26:48.220716  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.220775  338461 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:26:48.220842  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:26:48.243579  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.243844  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:26:48.405575  338461 ssh_runner.go:195] Run: systemctl --version
	I0110 02:26:48.411977  338461 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:26:48.446783  338461 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:26:48.451861  338461 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:26:48.451946  338461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:26:48.478187  338461 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0110 02:26:48.478210  338461 start.go:496] detecting cgroup driver to use...
	I0110 02:26:48.478243  338461 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:26:48.478288  338461 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:26:48.496294  338461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:26:48.508994  338461 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:26:48.509050  338461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:26:48.526619  338461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:26:48.546200  338461 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:26:48.630754  338461 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:26:48.721548  338461 docker.go:234] disabling docker service ...
	I0110 02:26:48.721596  338461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:26:48.741103  338461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:26:48.754750  338461 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:26:48.849106  338461 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:26:48.926371  338461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:26:48.938571  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:26:48.953463  338461 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:26:48.953530  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.967831  338461 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:26:48.967929  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.981096  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:48.994270  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.003708  338461 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:26:49.012357  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.021802  338461 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.034747  338461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:26:49.043418  338461 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:26:49.050386  338461 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:26:49.057269  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:49.130961  338461 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:26:49.285916  338461 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:26:49.285981  338461 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:26:49.289691  338461 start.go:574] Will wait 60s for crictl version
	I0110 02:26:49.289750  338461 ssh_runner.go:195] Run: which crictl
	I0110 02:26:49.293070  338461 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:26:49.316456  338461 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:26:49.316525  338461 ssh_runner.go:195] Run: crio --version
	I0110 02:26:49.343597  338461 ssh_runner.go:195] Run: crio --version
	I0110 02:26:49.371114  338461 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:26:49.372159  338461 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:26:49.389573  338461 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:26:49.393453  338461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:49.404679  338461 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:26:49.405677  338461 kubeadm.go:884] updating cluster {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:26:49.405793  338461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:26:49.405837  338461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:49.440734  338461 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:49.440758  338461 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:26:49.440812  338461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:26:49.469164  338461 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:26:49.469186  338461 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:26:49.469194  338461 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:26:49.469275  338461 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:26:49.469338  338461 ssh_runner.go:195] Run: crio config
	I0110 02:26:49.516476  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:49.516496  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:49.516510  338461 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:26:49.516530  338461 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843779 NodeName:newest-cni-843779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:26:49.516639  338461 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:26:49.516699  338461 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:26:49.524516  338461 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:26:49.524573  338461 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:26:49.532047  338461 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:26:49.543799  338461 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:26:49.557580  338461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0110 02:26:49.569161  338461 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:26:49.572423  338461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:26:49.581744  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:26:49.662065  338461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:26:49.689947  338461 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779 for IP: 192.168.85.2
	I0110 02:26:49.689968  338461 certs.go:195] generating shared ca certs ...
	I0110 02:26:49.689987  338461 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.690118  338461 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:26:49.690155  338461 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:26:49.690165  338461 certs.go:257] generating profile certs ...
	I0110 02:26:49.690213  338461 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key
	I0110 02:26:49.690230  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt with IP's: []
	I0110 02:26:49.756357  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt ...
	I0110 02:26:49.756381  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.crt: {Name:mk133e41b9f631c1d31398329e120a6d2e8c733e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.756536  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key ...
	I0110 02:26:49.756548  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key: {Name:mk1a6751a5bfd0db1a5029ef4003e6943a863573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.756626  338461 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5
	I0110 02:26:49.756641  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:26:49.820417  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 ...
	I0110 02:26:49.820450  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5: {Name:mk37a665bf86ff3fb7ea7a72608ed18515127576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.820601  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5 ...
	I0110 02:26:49.820613  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5: {Name:mkcfe121d9bb2cde5a393290decd7e10f53e5ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.820724  338461 certs.go:382] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt.80ef10c5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt
	I0110 02:26:49.820839  338461 certs.go:386] copying /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5 -> /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key
	I0110 02:26:49.820918  338461 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key
	I0110 02:26:49.820934  338461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt with IP's: []
	I0110 02:26:49.878096  338461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt ...
	I0110 02:26:49.878116  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt: {Name:mk7fbd29eafac26d0fd2ce98341bca7262aa29d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.878239  338461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key ...
	I0110 02:26:49.878251  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key: {Name:mk6eed83d5bd4bb5410d906db9b88c82acb84bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:26:49.878412  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:26:49.878451  338461 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:26:49.878461  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:26:49.878484  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:26:49.878507  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:26:49.878530  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:26:49.878568  338461 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:26:49.879163  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:26:49.896970  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:26:49.913364  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:26:49.930220  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:26:49.947718  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:26:49.965127  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:26:49.984031  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:26:50.002271  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:26:50.020474  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:26:50.039132  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:26:50.055600  338461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:26:50.075592  338461 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:26:50.090453  338461 ssh_runner.go:195] Run: openssl version
	I0110 02:26:50.096522  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.103714  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:26:50.111443  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.115011  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.115064  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:26:50.153459  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:26:50.160763  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14086.pem /etc/ssl/certs/51391683.0
	I0110 02:26:50.168017  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.174985  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:26:50.182015  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.185610  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.185650  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:26:50.221124  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:50.228348  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/140862.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:26:50.235190  338461 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.242739  338461 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:26:50.249485  338461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.252727  338461 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.252768  338461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:26:50.288639  338461 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:26:50.296037  338461 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:26:50.303287  338461 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:26:50.306535  338461 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:26:50.306586  338461 kubeadm.go:401] StartCluster: {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:26:50.306674  338461 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:26:50.306717  338461 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:26:50.333629  338461 cri.go:96] found id: ""
	I0110 02:26:50.333685  338461 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:26:50.341547  338461 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:26:50.349445  338461 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:26:50.349507  338461 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:26:50.360715  338461 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:26:50.360732  338461 kubeadm.go:158] found existing configuration files:
	
	I0110 02:26:50.360788  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:26:50.386174  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:26:50.386241  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:26:50.395383  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:26:50.410044  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:26:50.410107  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:26:50.418596  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:26:50.428393  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:26:50.428444  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:26:50.436482  338461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:26:50.444188  338461 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:26:50.444240  338461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:26:50.452657  338461 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:26:50.494415  338461 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:26:50.494503  338461 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:26:50.563724  338461 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:26:50.563829  338461 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 02:26:50.563876  338461 kubeadm.go:319] OS: Linux
	I0110 02:26:50.563960  338461 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:26:50.564045  338461 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:26:50.564142  338461 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:26:50.564234  338461 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:26:50.564307  338461 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:26:50.564383  338461 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:26:50.564454  338461 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:26:50.564517  338461 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 02:26:50.622544  338461 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:26:50.622713  338461 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:26:50.622878  338461 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:26:50.630207  338461 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0110 02:26:48.986976  333054 pod_ready.go:104] pod "coredns-7d764666f9-rhgg5" is not "Ready", error: <nil>
	I0110 02:26:49.484087  333054 pod_ready.go:94] pod "coredns-7d764666f9-rhgg5" is "Ready"
	I0110 02:26:49.484114  333054 pod_ready.go:86] duration metric: took 31.505548695s for pod "coredns-7d764666f9-rhgg5" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.486734  333054 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.490527  333054 pod_ready.go:94] pod "etcd-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.490552  333054 pod_ready.go:86] duration metric: took 3.797789ms for pod "etcd-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.492326  333054 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.495598  333054 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.495619  333054 pod_ready.go:86] duration metric: took 3.274816ms for pod "kube-apiserver-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.499483  333054 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.682853  333054 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:49.682877  333054 pod_ready.go:86] duration metric: took 183.376493ms for pod "kube-controller-manager-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:49.882938  333054 pod_ready.go:83] waiting for pod "kube-proxy-6dcdf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.283112  333054 pod_ready.go:94] pod "kube-proxy-6dcdf" is "Ready"
	I0110 02:26:50.283137  333054 pod_ready.go:86] duration metric: took 400.175094ms for pod "kube-proxy-6dcdf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.483261  333054 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.883249  333054 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-313784" is "Ready"
	I0110 02:26:50.883281  333054 pod_ready.go:86] duration metric: took 399.994421ms for pod "kube-scheduler-default-k8s-diff-port-313784" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:26:50.883295  333054 pod_ready.go:40] duration metric: took 32.90814338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:26:50.927315  333054 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:26:50.939799  333054 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-313784" cluster and "default" namespace by default
	I0110 02:26:50.638660  338461 out.go:252]   - Generating certificates and keys ...
	I0110 02:26:50.638761  338461 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:26:50.638847  338461 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:26:50.748673  338461 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:26:50.808044  338461 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:26:50.825988  338461 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:26:51.053436  338461 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:26:51.189722  338461 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:26:51.189935  338461 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-843779] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:26:51.265783  338461 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:26:51.266038  338461 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-843779] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:26:51.453075  338461 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:26:51.463944  338461 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:26:51.492453  338461 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:26:51.492568  338461 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:26:51.639156  338461 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:26:51.713669  338461 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:26:51.737323  338461 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:26:51.771362  338461 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:26:51.803764  338461 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:26:51.804316  338461 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:26:51.807706  338461 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:26:51.809141  338461 out.go:252]   - Booting up control plane ...
	I0110 02:26:51.809224  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:26:51.809305  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:26:51.810160  338461 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:26:51.836089  338461 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:26:51.836223  338461 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:26:51.843051  338461 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:26:51.843358  338461 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:26:51.843421  338461 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:26:51.940122  338461 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:26:51.940253  338461 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:26:52.440845  338461 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.777212ms
	I0110 02:26:52.443950  338461 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:26:52.444086  338461 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0110 02:26:52.444167  338461 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:26:52.444244  338461 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:26:53.449008  338461 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004897675s
	I0110 02:26:54.312692  338461 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.868609736s
	I0110 02:26:55.946035  338461 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501977175s
	I0110 02:26:55.963146  338461 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:26:55.973142  338461 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:26:55.980674  338461 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:26:55.980865  338461 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-843779 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:26:55.987964  338461 kubeadm.go:319] [bootstrap-token] Using token: 1ffugu.jcse9fz4pyvkzq7m
	I0110 02:26:55.989224  338461 out.go:252]   - Configuring RBAC rules ...
	I0110 02:26:55.989358  338461 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:26:55.992014  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:26:55.996727  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:26:55.998941  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:26:56.001013  338461 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:26:56.003983  338461 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:26:56.351224  338461 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:26:56.766422  338461 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:26:57.351738  338461 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:26:57.352701  338461 kubeadm.go:319] 
	I0110 02:26:57.352806  338461 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:26:57.352825  338461 kubeadm.go:319] 
	I0110 02:26:57.352959  338461 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:26:57.352968  338461 kubeadm.go:319] 
	I0110 02:26:57.352998  338461 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:26:57.353086  338461 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:26:57.353183  338461 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:26:57.353201  338461 kubeadm.go:319] 
	I0110 02:26:57.353278  338461 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:26:57.353288  338461 kubeadm.go:319] 
	I0110 02:26:57.353328  338461 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:26:57.353333  338461 kubeadm.go:319] 
	I0110 02:26:57.353390  338461 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:26:57.353464  338461 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:26:57.353532  338461 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:26:57.353544  338461 kubeadm.go:319] 
	I0110 02:26:57.353648  338461 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:26:57.353744  338461 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:26:57.353752  338461 kubeadm.go:319] 
	I0110 02:26:57.353880  338461 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1ffugu.jcse9fz4pyvkzq7m \
	I0110 02:26:57.354051  338461 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 \
	I0110 02:26:57.354081  338461 kubeadm.go:319] 	--control-plane 
	I0110 02:26:57.354090  338461 kubeadm.go:319] 
	I0110 02:26:57.354183  338461 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:26:57.354189  338461 kubeadm.go:319] 
	I0110 02:26:57.354317  338461 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1ffugu.jcse9fz4pyvkzq7m \
	I0110 02:26:57.354475  338461 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:093b0c5308ebe6b788955328596c4c485082eadd010b862ad787e602035f71a4 
	I0110 02:26:57.356934  338461 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 02:26:57.357052  338461 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:26:57.357076  338461 cni.go:84] Creating CNI manager for ""
	I0110 02:26:57.357083  338461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:26:57.358426  338461 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:26:57.359425  338461 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:26:57.363469  338461 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:26:57.363485  338461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:26:57.376178  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:26:57.578386  338461 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:26:57.578562  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:57.578594  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-843779 minikube.k8s.io/updated_at=2026_01_10T02_26_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=newest-cni-843779 minikube.k8s.io/primary=true
	I0110 02:26:57.588227  338461 ops.go:34] apiserver oom_adj: -16
	I0110 02:26:57.657708  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:58.158330  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:58.658496  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:59.158389  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:26:59.657973  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:00.158625  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:00.657790  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:01.158769  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:01.658356  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:02.157852  338461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:27:02.222922  338461 kubeadm.go:1114] duration metric: took 4.644421285s to wait for elevateKubeSystemPrivileges
	I0110 02:27:02.222958  338461 kubeadm.go:403] duration metric: took 11.916375506s to StartCluster
	I0110 02:27:02.222979  338461 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:02.223054  338461 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:02.224315  338461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:02.224602  338461 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:27:02.224625  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:27:02.224720  338461 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:27:02.224804  338461 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-843779"
	I0110 02:27:02.224819  338461 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-843779"
	I0110 02:27:02.224830  338461 addons.go:70] Setting default-storageclass=true in profile "newest-cni-843779"
	I0110 02:27:02.224854  338461 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:02.224854  338461 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843779"
	I0110 02:27:02.224821  338461 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:02.225428  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.225523  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.226702  338461 out.go:179] * Verifying Kubernetes components...
	I0110 02:27:02.227842  338461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:02.250197  338461 addons.go:239] Setting addon default-storageclass=true in "newest-cni-843779"
	I0110 02:27:02.250249  338461 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:02.250829  338461 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:02.251301  338461 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:27:02.252449  338461 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:02.252463  338461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:27:02.252502  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:02.281535  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:02.282356  338461 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:02.282376  338461 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:27:02.282444  338461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:02.310280  338461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:02.321999  338461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:27:02.376661  338461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:02.399734  338461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:02.431196  338461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:02.524677  338461 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0110 02:27:02.525738  338461 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:27:02.525801  338461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:27:02.735965  338461 api_server.go:72] duration metric: took 511.326051ms to wait for apiserver process to appear ...
	I0110 02:27:02.735993  338461 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:27:02.736010  338461 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:02.742014  338461 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:27:02.743062  338461 api_server.go:141] control plane version: v1.35.0
	I0110 02:27:02.743090  338461 api_server.go:131] duration metric: took 7.089818ms to wait for apiserver health ...
	I0110 02:27:02.743103  338461 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:27:02.745969  338461 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:27:02.746876  338461 system_pods.go:59] 8 kube-system pods found
	I0110 02:27:02.746927  338461 system_pods.go:61] "coredns-7d764666f9-zmtqf" [bab0ce6c-6845-4a76-aba8-25902122e535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:02.746940  338461 system_pods.go:61] "etcd-newest-cni-843779" [fdd4d85a-8248-4455-82c1-256311f58e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:27:02.746952  338461 system_pods.go:61] "kindnet-p5kwz" [a4006850-95c0-4567-9f85-7914adcf599d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:27:02.746962  338461 system_pods.go:61] "kube-apiserver-newest-cni-843779" [6c2775ff-47fa-4806-9434-1cf525435963] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:27:02.746972  338461 system_pods.go:61] "kube-controller-manager-newest-cni-843779" [3d61a2c1-6564-4d15-9c8b-1eaefd4c6878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:27:02.746986  338461 system_pods.go:61] "kube-proxy-9djhz" [d97a2a8a-cfa4-414f-ad6d-47af95479498] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:27:02.747020  338461 system_pods.go:61] "kube-scheduler-newest-cni-843779" [b6848e97-9fd5-4a56-b28d-0f581cc698b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:27:02.747038  338461 system_pods.go:61] "storage-provisioner" [4f1dd65f-c7de-48ab-8d72-fcc925bbd6be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:02.747051  338461 system_pods.go:74] duration metric: took 3.939591ms to wait for pod list to return data ...
	I0110 02:27:02.747065  338461 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:27:02.747347  338461 addons.go:530] duration metric: took 522.62294ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:27:02.749774  338461 default_sa.go:45] found service account: "default"
	I0110 02:27:02.749806  338461 default_sa.go:55] duration metric: took 2.7205ms for default service account to be created ...
	I0110 02:27:02.749819  338461 kubeadm.go:587] duration metric: took 525.183285ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:02.749838  338461 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:27:02.752066  338461 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:27:02.752090  338461 node_conditions.go:123] node cpu capacity is 8
	I0110 02:27:02.752105  338461 node_conditions.go:105] duration metric: took 2.261829ms to run NodePressure ...
	I0110 02:27:02.752117  338461 start.go:242] waiting for startup goroutines ...
	I0110 02:27:03.028702  338461 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-843779" context rescaled to 1 replicas
	I0110 02:27:03.028738  338461 start.go:247] waiting for cluster config update ...
	I0110 02:27:03.028749  338461 start.go:256] writing updated cluster config ...
	I0110 02:27:03.029031  338461 ssh_runner.go:195] Run: rm -f paused
	I0110 02:27:03.083609  338461 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:27:03.085383  338461 out.go:179] * Done! kubectl is now configured to use "newest-cni-843779" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:26:52 newest-cni-843779 crio[773]: time="2026-01-10T02:26:52.789252161Z" level=info msg="Started container" PID=1216 containerID=7fcc254e8d04d541df465ff49413e2ef65c38089332d16ce0f2f7804e6fc6401 description=kube-system/kube-controller-manager-newest-cni-843779/kube-controller-manager id=8708e81d-e3f7-4e32-80aa-461119216dbc name=/runtime.v1.RuntimeService/StartContainer sandboxID=8677f8ac6bd3d291db16d67191b68de55dedebe820d70d1466f1ccc8c29b49e1
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.643319521Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-9djhz/POD" id=f3685b94-d896-4802-aa0f-15a0deedcde8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.643399457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.644121126Z" level=info msg="Running pod sandbox: kube-system/kindnet-p5kwz/POD" id=1bd8b875-28b4-4537-8048-d5fbfbbc7ff7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.644185707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.647465712Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f3685b94-d896-4802-aa0f-15a0deedcde8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.648259901Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1bd8b875-28b4-4537-8048-d5fbfbbc7ff7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.649059725Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.649644757Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.649944502Z" level=info msg="Ran pod sandbox 2a809247de15d066c1045f8624993ace8d2d84d4a0ee9ca6fadba02b6cac0232 with infra container: kube-system/kube-proxy-9djhz/POD" id=f3685b94-d896-4802-aa0f-15a0deedcde8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.650474744Z" level=info msg="Ran pod sandbox 31d4fd7c6dd98254d5d09bb296de7057b1cf6b84cc60b56a2c215c7a60a0ef7c with infra container: kube-system/kindnet-p5kwz/POD" id=1bd8b875-28b4-4537-8048-d5fbfbbc7ff7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.651149978Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ed2ef6be-f75c-450b-ad7a-b00334c6d1c2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.651623378Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=e2da1cb2-0f42-4dae-9496-921feecd9734 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.651738727Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=e2da1cb2-0f42-4dae-9496-921feecd9734 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.651922409Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=e2da1cb2-0f42-4dae-9496-921feecd9734 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.652236912Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=daf1172d-a57a-4411-a041-09ff2376de4c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.652933668Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2526e7fb-a2a4-4d49-b8ab-64adae3a008b name=/runtime.v1.ImageService/PullImage
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.653408911Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.6560798Z" level=info msg="Creating container: kube-system/kube-proxy-9djhz/kube-proxy" id=cf119ebb-e6ba-411b-a269-2b1b99e53139 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.656190965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.66094778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.661568004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.703879812Z" level=info msg="Created container add9ce50e42700010ee45d3ddcde01db0d09b16dde08a0c9d4525fe173c5e2b7: kube-system/kube-proxy-9djhz/kube-proxy" id=cf119ebb-e6ba-411b-a269-2b1b99e53139 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.704665917Z" level=info msg="Starting container: add9ce50e42700010ee45d3ddcde01db0d09b16dde08a0c9d4525fe173c5e2b7" id=6ed1564b-af51-4839-8f4c-a1c1b7bd98ad name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:02 newest-cni-843779 crio[773]: time="2026-01-10T02:27:02.707928063Z" level=info msg="Started container" PID=1590 containerID=add9ce50e42700010ee45d3ddcde01db0d09b16dde08a0c9d4525fe173c5e2b7 description=kube-system/kube-proxy-9djhz/kube-proxy id=6ed1564b-af51-4839-8f4c-a1c1b7bd98ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a809247de15d066c1045f8624993ace8d2d84d4a0ee9ca6fadba02b6cac0232
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	add9ce50e4270       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   1 second ago        Running             kube-proxy                0                   2a809247de15d       kube-proxy-9djhz                            kube-system
	db882e806870e       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      0                   97ec614b2dbbb       etcd-newest-cni-843779                      kube-system
	7fcc254e8d04d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   11 seconds ago      Running             kube-controller-manager   0                   8677f8ac6bd3d       kube-controller-manager-newest-cni-843779   kube-system
	7575e87e1c2e6       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   11 seconds ago      Running             kube-scheduler            0                   e3e90788a7f08       kube-scheduler-newest-cni-843779            kube-system
	016355acfa191       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   11 seconds ago      Running             kube-apiserver            0                   f1a5fdece5f31       kube-apiserver-newest-cni-843779            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-843779
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-843779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=newest-cni-843779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_26_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:26:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-843779
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:26:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:26:56 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:26:56 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:26:56 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 02:26:56 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-843779
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                d81a3470-fe6c-4f6c-853e-984980245e0f
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-843779                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-p5kwz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-843779             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-843779    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-9djhz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-843779             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-843779 event: Registered Node newest-cni-843779 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [db882e806870e771ea9072a79abf48564a9376d8583edfb61a6175bc23c5b1ca] <==
	{"level":"info","ts":"2026-01-10T02:26:52.823927Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:26:53.416477Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:26:53.416568Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:26:53.416636Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-10T02:26:53.416655Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:26:53.416671Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:53.417199Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:53.417265Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:26:53.417284Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:53.417293Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:26:53.418053Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-843779 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:26:53.418085Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:26:53.418137Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:26:53.418153Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:26:53.418332Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:26:53.418390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:26:53.418648Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:26:53.418736Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:26:53.418795Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:26:53.418852Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:26:53.419006Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:26:53.419374Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:26:53.419595Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:26:53.421786Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:26:53.421829Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 02:27:04 up  1:09,  0 user,  load average: 2.84, 3.34, 2.36
	Linux newest-cni-843779 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [016355acfa1916be4809b009e6252a134b87544144f0725c162a56e1ddef0b78] <==
	I0110 02:26:54.352271       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:26:54.352660       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:26:54.355232       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:26:54.356329       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:26:54.375982       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:26:54.376245       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:26:54.379550       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:26:54.544321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:26:55.253643       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:26:55.257127       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:26:55.257143       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:26:55.663073       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:26:55.693958       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:26:55.756420       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:26:55.762556       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 02:26:55.763650       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:26:55.767392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:26:56.288863       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:26:56.756513       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:26:56.765597       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:26:56.771354       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:27:02.040974       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:27:02.192110       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:27:02.196659       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:27:02.291557       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7fcc254e8d04d541df465ff49413e2ef65c38089332d16ce0f2f7804e6fc6401] <==
	I0110 02:27:01.094699       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094689       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094554       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094818       1 range_allocator.go:177] "Sending events to api server"
	I0110 02:27:01.094823       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094828       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094844       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094858       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094866       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 02:27:01.094872       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:01.094877       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094947       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.094995       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.095063       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:27:01.095182       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-843779"
	I0110 02:27:01.095229       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:27:01.095494       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.095510       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.101406       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.102299       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:01.103180       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-843779" podCIDRs=["10.42.0.0/24"]
	I0110 02:27:01.195825       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:01.195840       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:27:01.195844       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:27:01.203145       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [add9ce50e42700010ee45d3ddcde01db0d09b16dde08a0c9d4525fe173c5e2b7] <==
	I0110 02:27:02.751939       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:27:02.810035       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:02.910912       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:02.910955       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:27:02.911043       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:27:02.928880       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:27:02.928962       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:27:02.934021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:27:02.934460       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:27:02.934479       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:27:02.935755       1 config.go:200] "Starting service config controller"
	I0110 02:27:02.935785       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:27:02.935800       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:27:02.935832       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:27:02.935835       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:27:02.935841       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:27:02.935958       1 config.go:309] "Starting node config controller"
	I0110 02:27:02.935973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:27:03.036708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:27:03.036739       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:27:03.036757       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:27:03.036772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7575e87e1c2e6280560772ea1c50792596b97edafe740f61084dc97ec44f838c] <==
	E0110 02:26:54.311822       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:26:54.311863       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:26:54.311926       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:26:54.311945       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:26:54.311997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:26:54.313149       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:26:54.313151       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:26:54.313167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:26:54.313244       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:26:54.313300       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:26:54.313306       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:26:54.313323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:26:55.123100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:26:55.129899       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 02:26:55.156613       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:26:55.177181       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:26:55.195971       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:26:55.242059       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:26:55.253996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:26:55.272816       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:26:55.308132       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:26:55.425346       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:26:55.447148       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:26:55.470304       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I0110 02:26:57.706727       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:26:57 newest-cni-843779 kubelet[1300]: E0110 02:26:57.602678    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-843779" containerName="kube-controller-manager"
	Jan 10 02:26:57 newest-cni-843779 kubelet[1300]: I0110 02:26:57.650159    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-843779" podStartSLOduration=1.650138997 podStartE2EDuration="1.650138997s" podCreationTimestamp="2026-01-10 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:26:57.640090776 +0000 UTC m=+1.141167780" watchObservedRunningTime="2026-01-10 02:26:57.650138997 +0000 UTC m=+1.151215980"
	Jan 10 02:26:57 newest-cni-843779 kubelet[1300]: I0110 02:26:57.658256    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-843779" podStartSLOduration=2.658238031 podStartE2EDuration="2.658238031s" podCreationTimestamp="2026-01-10 02:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:26:57.650387891 +0000 UTC m=+1.151464879" watchObservedRunningTime="2026-01-10 02:26:57.658238031 +0000 UTC m=+1.159315029"
	Jan 10 02:26:57 newest-cni-843779 kubelet[1300]: I0110 02:26:57.658364    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-843779" podStartSLOduration=1.658355511 podStartE2EDuration="1.658355511s" podCreationTimestamp="2026-01-10 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:26:57.658201274 +0000 UTC m=+1.159278263" watchObservedRunningTime="2026-01-10 02:26:57.658355511 +0000 UTC m=+1.159432498"
	Jan 10 02:26:57 newest-cni-843779 kubelet[1300]: I0110 02:26:57.666187    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-843779" podStartSLOduration=1.666175619 podStartE2EDuration="1.666175619s" podCreationTimestamp="2026-01-10 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:26:57.666055965 +0000 UTC m=+1.167132952" watchObservedRunningTime="2026-01-10 02:26:57.666175619 +0000 UTC m=+1.167252619"
	Jan 10 02:26:58 newest-cni-843779 kubelet[1300]: E0110 02:26:58.595063    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-843779" containerName="kube-scheduler"
	Jan 10 02:26:58 newest-cni-843779 kubelet[1300]: E0110 02:26:58.595115    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-843779" containerName="kube-controller-manager"
	Jan 10 02:26:58 newest-cni-843779 kubelet[1300]: E0110 02:26:58.595406    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:26:58 newest-cni-843779 kubelet[1300]: E0110 02:26:58.595574    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:26:59 newest-cni-843779 kubelet[1300]: E0110 02:26:59.596571    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:26:59 newest-cni-843779 kubelet[1300]: E0110 02:26:59.596689    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-843779" containerName="kube-controller-manager"
	Jan 10 02:26:59 newest-cni-843779 kubelet[1300]: E0110 02:26:59.596770    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-843779" containerName="kube-scheduler"
	Jan 10 02:27:00 newest-cni-843779 kubelet[1300]: E0110 02:27:00.597743    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:27:00 newest-cni-843779 kubelet[1300]: E0110 02:27:00.974964    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:01 newest-cni-843779 kubelet[1300]: I0110 02:27:01.202228    1300 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 02:27:01 newest-cni-843779 kubelet[1300]: I0110 02:27:01.202950    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409473    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d97a2a8a-cfa4-414f-ad6d-47af95479498-xtables-lock\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409530    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-cni-cfg\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409560    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-xtables-lock\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409586    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d97a2a8a-cfa4-414f-ad6d-47af95479498-kube-proxy\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409609    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrdk\" (UniqueName: \"kubernetes.io/projected/a4006850-95c0-4567-9f85-7914adcf599d-kube-api-access-ncrdk\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409642    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwchk\" (UniqueName: \"kubernetes.io/projected/d97a2a8a-cfa4-414f-ad6d-47af95479498-kube-api-access-xwchk\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409665    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-lib-modules\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:02 newest-cni-843779 kubelet[1300]: I0110 02:27:02.409686    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d97a2a8a-cfa4-414f-ad6d-47af95479498-lib-modules\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:03 newest-cni-843779 kubelet[1300]: I0110 02:27:03.620034    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9djhz" podStartSLOduration=1.6200171939999999 podStartE2EDuration="1.620017194s" podCreationTimestamp="2026-01-10 02:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:27:03.619842891 +0000 UTC m=+7.120919880" watchObservedRunningTime="2026-01-10 02:27:03.620017194 +0000 UTC m=+7.121094181"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843779 -n newest-cni-843779
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-843779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-zmtqf storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner: exit status 1 (67.215091ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-zmtqf" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.11s)
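The NotFound errors above have a likely cause: the pod list at helpers_test.go:270 queries all namespaces (-A), while the describe at helpers_test.go:286 passes no namespace, so kubectl looks for these kube-system pods in default; the pods may also be deleted or replaced between the two calls (coredns had just been rescaled to 1 replica). A minimal shell sketch of the same post-mortem, assuming the newest-cni-843779 context from this run is still reachable; the explicit -n kube-system is the only change from what the harness runs:

  # Hypothetical reproduction of the post-mortem above; context and namespace taken from this log.
  PODS=$(kubectl --context newest-cni-843779 get po -A \
          --field-selector=status.phase!=Running \
          -o jsonpath='{.items[*].metadata.name}')
  # Describe in the namespace the pods actually live in, rather than the default namespace.
  [ -n "$PODS" ] && kubectl --context newest-cni-843779 -n kube-system describe pod $PODS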

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-843779 --alsologtostderr -v=1
E0110 02:27:25.608170   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-843779 --alsologtostderr -v=1: exit status 80 (2.158471143s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-843779 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:27:24.858956  350692 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:27:24.859419  350692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:24.859431  350692 out.go:374] Setting ErrFile to fd 2...
	I0110 02:27:24.859437  350692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:24.859630  350692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:27:24.859881  350692 out.go:368] Setting JSON to false
	I0110 02:27:24.859916  350692 mustload.go:66] Loading cluster: newest-cni-843779
	I0110 02:27:24.860244  350692 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:24.860631  350692 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:24.878479  350692 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:24.878724  350692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:27:24.934182  350692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 02:27:24.923424255 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:27:24.934857  350692 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-843779 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:27:24.937333  350692 out.go:179] * Pausing node newest-cni-843779 ... 
	I0110 02:27:24.938409  350692 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:24.938673  350692 ssh_runner.go:195] Run: systemctl --version
	I0110 02:27:24.938717  350692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:24.957298  350692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:25.048801  350692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:25.060304  350692 pause.go:52] kubelet running: true
	I0110 02:27:25.060355  350692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:25.191377  350692 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:25.191459  350692 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:25.254625  350692 cri.go:96] found id: "312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387"
	I0110 02:27:25.254652  350692 cri.go:96] found id: "9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2"
	I0110 02:27:25.254659  350692 cri.go:96] found id: "470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e"
	I0110 02:27:25.254664  350692 cri.go:96] found id: "76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613"
	I0110 02:27:25.254668  350692 cri.go:96] found id: "8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b"
	I0110 02:27:25.254680  350692 cri.go:96] found id: "598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd"
	I0110 02:27:25.254685  350692 cri.go:96] found id: ""
	I0110 02:27:25.254730  350692 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:25.265799  350692 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:25Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:27:25.555262  350692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:25.567673  350692 pause.go:52] kubelet running: false
	I0110 02:27:25.567715  350692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:25.675974  350692 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:25.676048  350692 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:25.741396  350692 cri.go:96] found id: "312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387"
	I0110 02:27:25.741417  350692 cri.go:96] found id: "9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2"
	I0110 02:27:25.741421  350692 cri.go:96] found id: "470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e"
	I0110 02:27:25.741425  350692 cri.go:96] found id: "76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613"
	I0110 02:27:25.741429  350692 cri.go:96] found id: "8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b"
	I0110 02:27:25.741436  350692 cri.go:96] found id: "598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd"
	I0110 02:27:25.741440  350692 cri.go:96] found id: ""
	I0110 02:27:25.741482  350692 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:25.960531  350692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:25.974685  350692 pause.go:52] kubelet running: false
	I0110 02:27:25.974749  350692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:26.086355  350692 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:26.086429  350692 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:26.148195  350692 cri.go:96] found id: "312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387"
	I0110 02:27:26.148219  350692 cri.go:96] found id: "9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2"
	I0110 02:27:26.148223  350692 cri.go:96] found id: "470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e"
	I0110 02:27:26.148226  350692 cri.go:96] found id: "76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613"
	I0110 02:27:26.148230  350692 cri.go:96] found id: "8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b"
	I0110 02:27:26.148235  350692 cri.go:96] found id: "598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd"
	I0110 02:27:26.148238  350692 cri.go:96] found id: ""
	I0110 02:27:26.148274  350692 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:26.761104  350692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:27:26.773672  350692 pause.go:52] kubelet running: false
	I0110 02:27:26.773722  350692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:27:26.879588  350692 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:27:26.879674  350692 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:27:26.942186  350692 cri.go:96] found id: "312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387"
	I0110 02:27:26.942212  350692 cri.go:96] found id: "9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2"
	I0110 02:27:26.942217  350692 cri.go:96] found id: "470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e"
	I0110 02:27:26.942220  350692 cri.go:96] found id: "76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613"
	I0110 02:27:26.942223  350692 cri.go:96] found id: "8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b"
	I0110 02:27:26.942226  350692 cri.go:96] found id: "598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd"
	I0110 02:27:26.942229  350692 cri.go:96] found id: ""
	I0110 02:27:26.942270  350692 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:27:26.955543  350692 out.go:203] 
	W0110 02:27:26.956495  350692 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:27:26.956509  350692 out.go:285] * 
	* 
	W0110 02:27:26.958197  350692 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:27:26.959358  350692 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-843779 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-843779
helpers_test.go:244: (dbg) docker inspect newest-cni-843779:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2",
	        "Created": "2026-01-10T02:26:43.222970574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 348900,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:27:13.620303381Z",
	            "FinishedAt": "2026-01-10T02:27:12.827028957Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/hosts",
	        "LogPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2-json.log",
	        "Name": "/newest-cni-843779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-843779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-843779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2",
	                "LowerDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-843779",
	                "Source": "/var/lib/docker/volumes/newest-cni-843779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-843779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-843779",
	                "name.minikube.sigs.k8s.io": "newest-cni-843779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cc3a29192986fbbad37fcbc9139897ad11a3a4dd4bd1306498241e94272fc981",
	            "SandboxKey": "/var/run/docker/netns/cc3a29192986",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-843779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be608ab0d563e9be569420f07b60320d59caab1b6e3a4268ebcdc8a31d692309",
	                    "EndpointID": "02e626343f52103f2f4aa012d3415604befd9c6fd7c72e9ee3f974c62972ff2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:9d:de:01:7f:c3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-843779",
	                        "d1a10faa6dbc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779: exit status 2 (311.564496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843779 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ no-preload-190877 image list --format=json                                                                                                                                                                                                    │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p no-preload-190877 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ default-k8s-diff-port-313784 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p default-k8s-diff-port-313784 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ stop    │ -p newest-cni-843779 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ delete  │ -p default-k8s-diff-port-313784                                                                                                                                                                                                               │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ delete  │ -p default-k8s-diff-port-313784                                                                                                                                                                                                               │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ addons  │ enable dashboard -p newest-cni-843779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ newest-cni-843779 image list --format=json                                                                                                                                                                                                    │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p newest-cni-843779 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:27:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:27:13.412834  348700 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:27:13.413108  348700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:13.413117  348700 out.go:374] Setting ErrFile to fd 2...
	I0110 02:27:13.413121  348700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:13.413295  348700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:27:13.413727  348700 out.go:368] Setting JSON to false
	I0110 02:27:13.414655  348700 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4182,"bootTime":1768007851,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:27:13.414705  348700 start.go:143] virtualization: kvm guest
	I0110 02:27:13.416288  348700 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:27:13.417374  348700 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:27:13.417382  348700 notify.go:221] Checking for updates...
	I0110 02:27:13.419395  348700 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:27:13.420577  348700 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:13.421552  348700 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:27:13.422548  348700 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:27:13.423445  348700 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:27:13.424748  348700 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:13.425236  348700 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:27:13.447632  348700 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:27:13.447749  348700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:27:13.500432  348700 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2026-01-10 02:27:13.490190521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:27:13.500529  348700 docker.go:319] overlay module found
	I0110 02:27:13.501880  348700 out.go:179] * Using the docker driver based on existing profile
	I0110 02:27:13.502913  348700 start.go:309] selected driver: docker
	I0110 02:27:13.502925  348700 start.go:928] validating driver "docker" against &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:27:13.503005  348700 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:27:13.503497  348700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:27:13.553816  348700 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2026-01-10 02:27:13.544733156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:27:13.554104  348700 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:13.554129  348700 cni.go:84] Creating CNI manager for ""
	I0110 02:27:13.554178  348700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:27:13.554206  348700 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:27:13.555539  348700 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:27:13.556539  348700 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:27:13.557528  348700 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:27:13.558450  348700 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:27:13.558487  348700 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:27:13.558498  348700 cache.go:65] Caching tarball of preloaded images
	I0110 02:27:13.558547  348700 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:27:13.558588  348700 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:27:13.558603  348700 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:27:13.558728  348700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:27:13.577350  348700 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:27:13.577367  348700 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:27:13.577381  348700 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:27:13.577405  348700 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:27:13.577471  348700 start.go:364] duration metric: took 37.71µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:27:13.577488  348700 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:27:13.577492  348700 fix.go:54] fixHost starting: 
	I0110 02:27:13.577672  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:13.595344  348700 fix.go:112] recreateIfNeeded on newest-cni-843779: state=Stopped err=<nil>
	W0110 02:27:13.595382  348700 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:27:13.596809  348700 out.go:252] * Restarting existing docker container for "newest-cni-843779" ...
	I0110 02:27:13.596858  348700 cli_runner.go:164] Run: docker start newest-cni-843779
	I0110 02:27:13.819844  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:13.838060  348700 kic.go:430] container "newest-cni-843779" state is running.
	I0110 02:27:13.838482  348700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:27:13.856205  348700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:27:13.856370  348700 machine.go:94] provisionDockerMachine start ...
	I0110 02:27:13.856426  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:13.875477  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:13.875737  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:13.875751  348700 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:27:13.876381  348700 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49606->127.0.0.1:33135: read: connection reset by peer
	I0110 02:27:17.003548  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:27:17.003605  348700 ubuntu.go:182] provisioning hostname "newest-cni-843779"
	I0110 02:27:17.003702  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.021557  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:17.021777  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:17.021791  348700 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-843779 && echo "newest-cni-843779" | sudo tee /etc/hostname
	I0110 02:27:17.152935  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:27:17.153006  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.170223  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:17.170515  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:17.170538  348700 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:27:17.294158  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:27:17.294183  348700 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:27:17.294203  348700 ubuntu.go:190] setting up certificates
	I0110 02:27:17.294217  348700 provision.go:84] configureAuth start
	I0110 02:27:17.294261  348700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:27:17.312494  348700 provision.go:143] copyHostCerts
	I0110 02:27:17.312546  348700 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:27:17.312560  348700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:27:17.312627  348700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:27:17.312777  348700 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:27:17.312788  348700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:27:17.312815  348700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:27:17.312896  348700 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:27:17.312905  348700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:27:17.312936  348700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:27:17.313012  348700 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843779 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-843779]
	I0110 02:27:17.335669  348700 provision.go:177] copyRemoteCerts
	I0110 02:27:17.335719  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:27:17.335762  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.352837  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:17.444471  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:27:17.460985  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:27:17.477001  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:27:17.492804  348700 provision.go:87] duration metric: took 198.56873ms to configureAuth
	I0110 02:27:17.492834  348700 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:27:17.493025  348700 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:17.493133  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.510543  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:17.510750  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:17.510768  348700 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:27:17.781191  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:27:17.781214  348700 machine.go:97] duration metric: took 3.924831738s to provisionDockerMachine
	I0110 02:27:17.781224  348700 start.go:293] postStartSetup for "newest-cni-843779" (driver="docker")
	I0110 02:27:17.781234  348700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:27:17.781281  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:27:17.781316  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.799029  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:17.890728  348700 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:27:17.894112  348700 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:27:17.894133  348700 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:27:17.894142  348700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:27:17.894187  348700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:27:17.894254  348700 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:27:17.894345  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:27:17.901469  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:27:17.917829  348700 start.go:296] duration metric: took 136.594255ms for postStartSetup
	I0110 02:27:17.917920  348700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:27:17.917959  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.935974  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:18.025528  348700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:27:18.029828  348700 fix.go:56] duration metric: took 4.452329842s for fixHost
	I0110 02:27:18.029853  348700 start.go:83] releasing machines lock for "newest-cni-843779", held for 4.452371078s
	I0110 02:27:18.029928  348700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:27:18.047243  348700 ssh_runner.go:195] Run: cat /version.json
	I0110 02:27:18.047290  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:18.047330  348700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:27:18.047398  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:18.066550  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:18.066893  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:19.469781  348700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.422419408s)
	I0110 02:27:19.469861  348700 ssh_runner.go:235] Completed: cat /version.json: (1.422590338s)
	I0110 02:27:19.470030  348700 ssh_runner.go:195] Run: systemctl --version
	I0110 02:27:19.476476  348700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:27:19.509832  348700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:27:19.514308  348700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:27:19.514362  348700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:27:19.521936  348700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:27:19.521960  348700 start.go:496] detecting cgroup driver to use...
	I0110 02:27:19.521996  348700 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:27:19.522045  348700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:27:19.536177  348700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:27:19.547454  348700 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:27:19.547509  348700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:27:19.560409  348700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:27:19.571287  348700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:27:19.647180  348700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:27:19.722648  348700 docker.go:234] disabling docker service ...
	I0110 02:27:19.722708  348700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:27:19.735703  348700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:27:19.746704  348700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:27:19.822317  348700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:27:19.901182  348700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:27:19.912432  348700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:27:19.925394  348700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:27:19.925438  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.933552  348700 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:27:19.933606  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.942063  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.950172  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.958533  348700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:27:19.966227  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.974518  348700 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.982480  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.990530  348700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:27:19.997290  348700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:27:20.004145  348700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:20.077918  348700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:27:20.202996  348700 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:27:20.203049  348700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:27:20.206756  348700 start.go:574] Will wait 60s for crictl version
	I0110 02:27:20.206824  348700 ssh_runner.go:195] Run: which crictl
	I0110 02:27:20.210112  348700 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:27:20.233516  348700 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:27:20.233585  348700 ssh_runner.go:195] Run: crio --version
	I0110 02:27:20.259284  348700 ssh_runner.go:195] Run: crio --version
	I0110 02:27:20.287176  348700 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:27:20.288379  348700 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:27:20.305317  348700 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:27:20.309249  348700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:27:20.320346  348700 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:27:20.321353  348700 kubeadm.go:884] updating cluster {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:27:20.321523  348700 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:27:20.321586  348700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:27:20.355710  348700 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:27:20.355731  348700 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:27:20.355782  348700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:27:20.380250  348700 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:27:20.380270  348700 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:27:20.380276  348700 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:27:20.380367  348700 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:27:20.380436  348700 ssh_runner.go:195] Run: crio config
	I0110 02:27:20.422479  348700 cni.go:84] Creating CNI manager for ""
	I0110 02:27:20.422500  348700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:27:20.422516  348700 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:27:20.422538  348700 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843779 NodeName:newest-cni-843779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:27:20.423143  348700 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:27:20.423234  348700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:27:20.431905  348700 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:27:20.431961  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:27:20.439199  348700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:27:20.450857  348700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:27:20.462208  348700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0110 02:27:20.473604  348700 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:27:20.476806  348700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:27:20.485751  348700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:20.567596  348700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:20.591080  348700 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779 for IP: 192.168.85.2
	I0110 02:27:20.591104  348700 certs.go:195] generating shared ca certs ...
	I0110 02:27:20.591123  348700 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:20.591247  348700 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:27:20.591286  348700 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:27:20.591295  348700 certs.go:257] generating profile certs ...
	I0110 02:27:20.591394  348700 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key
	I0110 02:27:20.591456  348700 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5
	I0110 02:27:20.591495  348700 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key
	I0110 02:27:20.591605  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:27:20.591636  348700 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:27:20.591646  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:27:20.591670  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:27:20.591695  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:27:20.591720  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:27:20.591761  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:27:20.592406  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:27:20.610962  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:27:20.629119  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:27:20.648276  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:27:20.669552  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:27:20.687110  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:27:20.702878  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:27:20.718615  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:27:20.734371  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:27:20.750009  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:27:20.765836  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:27:20.782375  348700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:27:20.793896  348700 ssh_runner.go:195] Run: openssl version
	I0110 02:27:20.799742  348700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.806576  348700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:27:20.813425  348700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.816847  348700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.816900  348700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.850170  348700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:27:20.857485  348700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.864445  348700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:27:20.871151  348700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.874455  348700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.874499  348700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.907366  348700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:27:20.914005  348700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.920577  348700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:27:20.927314  348700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.930578  348700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.930617  348700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.965180  348700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:27:20.973127  348700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:27:20.976785  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:27:21.009953  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:27:21.043044  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:27:21.076437  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:27:21.119666  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:27:21.164341  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:27:21.214751  348700 kubeadm.go:401] StartCluster: {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:27:21.214861  348700 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:27:21.214934  348700 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:27:21.252879  348700 cri.go:96] found id: "470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e"
	I0110 02:27:21.252925  348700 cri.go:96] found id: "76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613"
	I0110 02:27:21.252931  348700 cri.go:96] found id: "8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b"
	I0110 02:27:21.252935  348700 cri.go:96] found id: "598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd"
	I0110 02:27:21.252940  348700 cri.go:96] found id: ""
	I0110 02:27:21.252986  348700 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:27:21.264469  348700 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:21Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:27:21.264626  348700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:27:21.272145  348700 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:27:21.272160  348700 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:27:21.272196  348700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:27:21.279020  348700 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:27:21.279416  348700 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-843779" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:21.279537  348700 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-10552/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-843779" cluster setting kubeconfig missing "newest-cni-843779" context setting]
	I0110 02:27:21.279826  348700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:21.281041  348700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:27:21.288090  348700 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 02:27:21.288112  348700 kubeadm.go:602] duration metric: took 15.947367ms to restartPrimaryControlPlane
	I0110 02:27:21.288120  348700 kubeadm.go:403] duration metric: took 73.381828ms to StartCluster
	I0110 02:27:21.288146  348700 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:21.288199  348700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:21.288673  348700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:21.288870  348700 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:27:21.289024  348700 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:27:21.289103  348700 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:21.289130  348700 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-843779"
	I0110 02:27:21.289151  348700 addons.go:70] Setting default-storageclass=true in profile "newest-cni-843779"
	I0110 02:27:21.289166  348700 addons.go:70] Setting dashboard=true in profile "newest-cni-843779"
	I0110 02:27:21.289194  348700 addons.go:239] Setting addon dashboard=true in "newest-cni-843779"
	W0110 02:27:21.289206  348700 addons.go:248] addon dashboard should already be in state true
	I0110 02:27:21.289234  348700 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:21.289180  348700 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843779"
	I0110 02:27:21.289154  348700 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-843779"
	W0110 02:27:21.289286  348700 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:27:21.289306  348700 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:21.289560  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.289725  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.289728  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.291507  348700 out.go:179] * Verifying Kubernetes components...
	I0110 02:27:21.292720  348700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:21.313343  348700 addons.go:239] Setting addon default-storageclass=true in "newest-cni-843779"
	W0110 02:27:21.313366  348700 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:27:21.313392  348700 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:21.313757  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.314233  348700 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:27:21.315153  348700 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:27:21.315211  348700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:21.315227  348700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:27:21.315284  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:21.317164  348700 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:27:21.318115  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:27:21.318130  348700 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:27:21.318180  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:21.341783  348700 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:21.341804  348700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:27:21.341861  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:21.342993  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:21.352476  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:21.368166  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:21.442386  348700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:21.454043  348700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:21.456758  348700 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:27:21.456813  348700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:27:21.460499  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:27:21.460512  348700 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:27:21.473816  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:27:21.473834  348700 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:27:21.475779  348700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:21.487704  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:27:21.487722  348700 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:27:21.500734  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:27:21.500756  348700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:27:21.513691  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:27:21.513712  348700 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:27:21.526659  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:27:21.526686  348700 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:27:21.539165  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:27:21.539190  348700 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:27:21.551121  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:27:21.551137  348700 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:27:21.563183  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:27:21.563202  348700 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:27:21.574937  348700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:27:23.179076  348700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.725000195s)
	I0110 02:27:23.179147  348700 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.722310802s)
	I0110 02:27:23.179187  348700 api_server.go:72] duration metric: took 1.8902513s to wait for apiserver process to appear ...
	I0110 02:27:23.179197  348700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.703390497s)
	I0110 02:27:23.179200  348700 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:27:23.179328  348700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.6043615s)
	I0110 02:27:23.179330  348700 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:23.180734  348700 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-843779 addons enable metrics-server
	
	I0110 02:27:23.186643  348700 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:27:23.186666  348700 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:27:23.192112  348700 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:27:23.193113  348700 addons.go:530] duration metric: took 1.90409785s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:27:23.679812  348700 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:23.684847  348700 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:27:23.684872  348700 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:27:24.179482  348700 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:24.183404  348700 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:27:24.184315  348700 api_server.go:141] control plane version: v1.35.0
	I0110 02:27:24.184336  348700 api_server.go:131] duration metric: took 1.005018895s to wait for apiserver health ...
	I0110 02:27:24.184344  348700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:27:24.187571  348700 system_pods.go:59] 8 kube-system pods found
	I0110 02:27:24.187603  348700 system_pods.go:61] "coredns-7d764666f9-zmtqf" [bab0ce6c-6845-4a76-aba8-25902122e535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:24.187616  348700 system_pods.go:61] "etcd-newest-cni-843779" [fdd4d85a-8248-4455-82c1-256311f58e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:27:24.187628  348700 system_pods.go:61] "kindnet-p5kwz" [a4006850-95c0-4567-9f85-7914adcf599d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:27:24.187642  348700 system_pods.go:61] "kube-apiserver-newest-cni-843779" [6c2775ff-47fa-4806-9434-1cf525435963] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:27:24.187652  348700 system_pods.go:61] "kube-controller-manager-newest-cni-843779" [3d61a2c1-6564-4d15-9c8b-1eaefd4c6878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:27:24.187665  348700 system_pods.go:61] "kube-proxy-9djhz" [d97a2a8a-cfa4-414f-ad6d-47af95479498] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:27:24.187676  348700 system_pods.go:61] "kube-scheduler-newest-cni-843779" [b6848e97-9fd5-4a56-b28d-0f581cc698b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:27:24.187683  348700 system_pods.go:61] "storage-provisioner" [4f1dd65f-c7de-48ab-8d72-fcc925bbd6be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:24.187701  348700 system_pods.go:74] duration metric: took 3.351677ms to wait for pod list to return data ...
	I0110 02:27:24.187710  348700 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:27:24.189782  348700 default_sa.go:45] found service account: "default"
	I0110 02:27:24.189797  348700 default_sa.go:55] duration metric: took 2.080785ms for default service account to be created ...
	I0110 02:27:24.189806  348700 kubeadm.go:587] duration metric: took 2.900872401s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:24.189823  348700 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:27:24.191534  348700 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:27:24.191559  348700 node_conditions.go:123] node cpu capacity is 8
	I0110 02:27:24.191575  348700 node_conditions.go:105] duration metric: took 1.746819ms to run NodePressure ...
	I0110 02:27:24.191591  348700 start.go:242] waiting for startup goroutines ...
	I0110 02:27:24.191600  348700 start.go:247] waiting for cluster config update ...
	I0110 02:27:24.191614  348700 start.go:256] writing updated cluster config ...
	I0110 02:27:24.191933  348700 ssh_runner.go:195] Run: rm -f paused
	I0110 02:27:24.242149  348700 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:27:24.244130  348700 out.go:179] * Done! kubectl is now configured to use "newest-cni-843779" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.961546829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.964694623Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9252035b-7d3c-4db6-8b6d-5f9549d11eea name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.965427953Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=db76147f-b4b8-4051-9e4b-6f9b329fceb8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.966328034Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.966827913Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.967308521Z" level=info msg="Ran pod sandbox 3a6744aebdfd2e40860fdb6bca2c297b2e560bba89b94ecb7144eeae936467ee with infra container: kube-system/kindnet-p5kwz/POD" id=9252035b-7d3c-4db6-8b6d-5f9549d11eea name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.967492287Z" level=info msg="Ran pod sandbox 096c20ff3c59cebacc4a3b73cbc529c8475552518e07e212e20f1851d3c34657 with infra container: kube-system/kube-proxy-9djhz/POD" id=db76147f-b4b8-4051-9e4b-6f9b329fceb8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.96848847Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a9d41207-7cc3-4a18-9aaf-9e184e69b415 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.968490926Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=2d516e0b-8e8f-4367-880d-2222f7e8dad9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.969525919Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=139ae996-f29f-499e-a9fd-72d42309eed2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.969527629Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=5b533515-5ad9-42a7-8686-f38e888e4c91 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970689436Z" level=info msg="Creating container: kube-system/kube-proxy-9djhz/kube-proxy" id=041ca2b1-c16f-4f57-bc3c-010264a5cee8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970820251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970826251Z" level=info msg="Creating container: kube-system/kindnet-p5kwz/kindnet-cni" id=0bf2ddc7-c6b5-41c2-912a-00a601e0d31c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970989244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.975499147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.976090412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.976248457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.976807881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.004161828Z" level=info msg="Created container 312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387: kube-system/kindnet-p5kwz/kindnet-cni" id=0bf2ddc7-c6b5-41c2-912a-00a601e0d31c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.004592561Z" level=info msg="Starting container: 312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387" id=460a8110-589d-47e6-8504-c50598e57621 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.00618194Z" level=info msg="Started container" PID=1054 containerID=312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387 description=kube-system/kindnet-p5kwz/kindnet-cni id=460a8110-589d-47e6-8504-c50598e57621 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a6744aebdfd2e40860fdb6bca2c297b2e560bba89b94ecb7144eeae936467ee
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.006779531Z" level=info msg="Created container 9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2: kube-system/kube-proxy-9djhz/kube-proxy" id=041ca2b1-c16f-4f57-bc3c-010264a5cee8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.007204103Z" level=info msg="Starting container: 9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2" id=0dcf07cd-b7b4-4c0c-93cb-d99a714a8082 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.009644991Z" level=info msg="Started container" PID=1055 containerID=9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2 description=kube-system/kube-proxy-9djhz/kube-proxy id=0dcf07cd-b7b4-4c0c-93cb-d99a714a8082 name=/runtime.v1.RuntimeService/StartContainer sandboxID=096c20ff3c59cebacc4a3b73cbc529c8475552518e07e212e20f1851d3c34657
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	312547cdc8bbb       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   3 seconds ago       Running             kindnet-cni               1                   3a6744aebdfd2       kindnet-p5kwz                               kube-system
	9e93e53fa02b3       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   3 seconds ago       Running             kube-proxy                1                   096c20ff3c59c       kube-proxy-9djhz                            kube-system
	470a11b3e6288       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   6 seconds ago       Running             kube-scheduler            1                   a95daa4a638e8       kube-scheduler-newest-cni-843779            kube-system
	76f4729b16b0d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   6 seconds ago       Running             kube-controller-manager   1                   15cab2dee57a3       kube-controller-manager-newest-cni-843779   kube-system
	8337df843b206       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   6 seconds ago       Running             kube-apiserver            1                   442ed3779581c       kube-apiserver-newest-cni-843779            kube-system
	598e38003a29a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   6 seconds ago       Running             etcd                      1                   9361fb3c4a977       etcd-newest-cni-843779                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-843779
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-843779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=newest-cni-843779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_26_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:26:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-843779
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:27:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-843779
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                d81a3470-fe6c-4f6c-853e-984980245e0f
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-843779                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-p5kwz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-843779             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-843779    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-9djhz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-843779             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-843779 event: Registered Node newest-cni-843779 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-843779 event: Registered Node newest-cni-843779 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd] <==
	{"level":"info","ts":"2026-01-10T02:27:21.237944Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:27:21.238628Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:27:21.238635Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:27:21.238780Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:27:21.238938Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:27:21.238602Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:27:21.239469Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:27:21.830156Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:27:21.830204Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:27:21.830297Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:27:21.830318Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:27:21.830355Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.831000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.831036Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:27:21.831055Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.831062Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.832043Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-843779 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:27:21.832043Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:27:21.832065Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:27:21.832335Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:27:21.832370Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:27:21.833727Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:27:21.833792Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:27:21.836406Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:27:21.836500Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 02:27:27 up  1:09,  0 user,  load average: 3.28, 3.40, 2.40
	Linux newest-cni-843779 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387] <==
	I0110 02:27:24.309115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:27:24.309505       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:27:24.309662       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:27:24.309689       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:27:24.309719       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:27:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:27:24.506588       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:27:24.506621       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:27:24.506656       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:27:24.507787       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:27:24.806854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:27:24.806909       1 metrics.go:72] Registering metrics
	I0110 02:27:24.806977       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b] <==
	I0110 02:27:22.741644       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:27:22.741103       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.741716       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:27:22.741743       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:27:22.741784       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:27:22.741814       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.741789       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.741763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:27:22.745451       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:27:22.741089       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:27:22.741130       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:27:22.752642       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:27:22.768741       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:27:22.776999       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:27:22.989593       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:27:23.013391       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:27:23.028343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:27:23.034787       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:27:23.039972       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:27:23.067119       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.143.22"}
	I0110 02:27:23.075671       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.73.102"}
	I0110 02:27:23.643745       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:27:26.338520       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:27:26.389671       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:27:26.489440       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613] <==
	I0110 02:27:25.891923       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892416       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892618       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892914       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892928       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893324       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893447       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893663       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893754       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893819       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893916       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894002       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894151       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894287       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894601       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893792       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.895296       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.895599       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892918       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.900636       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.901532       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:25.993024       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.993049       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:27:25.993056       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:27:26.001683       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2] <==
	I0110 02:27:24.041966       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:27:24.100392       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:24.200758       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:24.200790       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:27:24.200903       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:27:24.220698       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:27:24.220752       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:27:24.225769       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:27:24.226117       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:27:24.226156       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:27:24.227343       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:27:24.227374       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:27:24.227426       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:27:24.227437       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:27:24.227426       1 config.go:200] "Starting service config controller"
	I0110 02:27:24.227451       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:27:24.227675       1 config.go:309] "Starting node config controller"
	I0110 02:27:24.227691       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:27:24.227699       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:27:24.327593       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:27:24.327593       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:27:24.327595       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e] <==
	I0110 02:27:21.418424       1 serving.go:386] Generated self-signed cert in-memory
	I0110 02:27:22.731491       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:27:22.731522       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:27:22.736378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:27:22.736386       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0110 02:27:22.736441       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:22.736426       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0110 02:27:22.736452       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:22.736461       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:22.736546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:27:22.736929       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:27:22.837088       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.837129       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.837301       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:27:22 newest-cni-843779 kubelet[677]: E0110 02:27:22.810459     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.652540     677 apiserver.go:52] "Watching apiserver"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.656691     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-843779" containerName="kube-controller-manager"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.658146     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679085     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-xtables-lock\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679140     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-cni-cfg\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679174     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-lib-modules\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679193     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d97a2a8a-cfa4-414f-ad6d-47af95479498-xtables-lock\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679276     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d97a2a8a-cfa4-414f-ad6d-47af95479498-lib-modules\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.689432     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.689580     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.689723     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.695467     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-843779\" already exists" pod="kube-system/etcd-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.695558     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696173     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-843779\" already exists" pod="kube-system/kube-apiserver-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696181     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-843779\" already exists" pod="kube-system/kube-scheduler-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696260     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696303     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-843779" containerName="kube-scheduler"
	Jan 10 02:27:24 newest-cni-843779 kubelet[677]: E0110 02:27:24.695145     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-843779" containerName="kube-scheduler"
	Jan 10 02:27:24 newest-cni-843779 kubelet[677]: E0110 02:27:24.695293     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:27:24 newest-cni-843779 kubelet[677]: E0110 02:27:24.695631     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:25 newest-cni-843779 kubelet[677]: I0110 02:27:25.167728     677 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:27:25 newest-cni-843779 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:27:25 newest-cni-843779 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:27:25 newest-cni-843779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843779 -n newest-cni-843779
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843779 -n newest-cni-843779: exit status 2 (309.9627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-843779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58: exit status 1 (57.827046ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-zmtqf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-rzhh8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6kp58" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-843779
helpers_test.go:244: (dbg) docker inspect newest-cni-843779:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2",
	        "Created": "2026-01-10T02:26:43.222970574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 348900,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:27:13.620303381Z",
	            "FinishedAt": "2026-01-10T02:27:12.827028957Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/hosts",
	        "LogPath": "/var/lib/docker/containers/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2/d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2-json.log",
	        "Name": "/newest-cni-843779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-843779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-843779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1a10faa6dbcaeffc3d94eb50797c5d0953a61f2ef36d1f8f5bfb9a552a373a2",
	                "LowerDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1-init/diff:/var/lib/docker/overlay2/00d23c93affb69bafd924d890c7f36a7beca0336ba9654dc7771662e6302abe7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e66a24c2044fa3792d337a6f3867b9405f23bf9d3ffbc9ac4b060d4238d731b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-843779",
	                "Source": "/var/lib/docker/volumes/newest-cni-843779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-843779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-843779",
	                "name.minikube.sigs.k8s.io": "newest-cni-843779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cc3a29192986fbbad37fcbc9139897ad11a3a4dd4bd1306498241e94272fc981",
	            "SandboxKey": "/var/run/docker/netns/cc3a29192986",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-843779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be608ab0d563e9be569420f07b60320d59caab1b6e3a4268ebcdc8a31d692309",
	                    "EndpointID": "02e626343f52103f2f4aa012d3415604befd9c6fd7c72e9ee3f974c62972ff2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:9d:de:01:7f:c3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-843779",
	                        "d1a10faa6dbc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779: exit status 2 (315.131816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843779 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ old-k8s-version-188604 image list --format=json                                                                                                                                                                                               │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p old-k8s-version-188604 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ embed-certs-872415 image list --format=json                                                                                                                                                                                                   │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p embed-certs-872415 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p old-k8s-version-188604                                                                                                                                                                                                                     │ old-k8s-version-188604       │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ no-preload-190877 image list --format=json                                                                                                                                                                                                    │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ pause   │ -p no-preload-190877 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │                     │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p embed-certs-872415                                                                                                                                                                                                                         │ embed-certs-872415           │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ delete  │ -p no-preload-190877                                                                                                                                                                                                                          │ no-preload-190877            │ jenkins │ v1.37.0 │ 10 Jan 26 02:26 UTC │ 10 Jan 26 02:26 UTC │
	│ image   │ default-k8s-diff-port-313784 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p default-k8s-diff-port-313784 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	│ stop    │ -p newest-cni-843779 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ delete  │ -p default-k8s-diff-port-313784                                                                                                                                                                                                               │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ delete  │ -p default-k8s-diff-port-313784                                                                                                                                                                                                               │ default-k8s-diff-port-313784 │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ addons  │ enable dashboard -p newest-cni-843779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ start   │ -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ image   │ newest-cni-843779 image list --format=json                                                                                                                                                                                                    │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │ 10 Jan 26 02:27 UTC │
	│ pause   │ -p newest-cni-843779 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-843779            │ jenkins │ v1.37.0 │ 10 Jan 26 02:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:27:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:27:13.412834  348700 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:27:13.413108  348700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:13.413117  348700 out.go:374] Setting ErrFile to fd 2...
	I0110 02:27:13.413121  348700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:27:13.413295  348700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:27:13.413727  348700 out.go:368] Setting JSON to false
	I0110 02:27:13.414655  348700 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4182,"bootTime":1768007851,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:27:13.414705  348700 start.go:143] virtualization: kvm guest
	I0110 02:27:13.416288  348700 out.go:179] * [newest-cni-843779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:27:13.417374  348700 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:27:13.417382  348700 notify.go:221] Checking for updates...
	I0110 02:27:13.419395  348700 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:27:13.420577  348700 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:13.421552  348700 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:27:13.422548  348700 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:27:13.423445  348700 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:27:13.424748  348700 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:13.425236  348700 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:27:13.447632  348700 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:27:13.447749  348700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:27:13.500432  348700 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2026-01-10 02:27:13.490190521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:27:13.500529  348700 docker.go:319] overlay module found
	I0110 02:27:13.501880  348700 out.go:179] * Using the docker driver based on existing profile
	I0110 02:27:13.502913  348700 start.go:309] selected driver: docker
	I0110 02:27:13.502925  348700 start.go:928] validating driver "docker" against &{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:27:13.503005  348700 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:27:13.503497  348700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:27:13.553816  348700 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2026-01-10 02:27:13.544733156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:27:13.554104  348700 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:13.554129  348700 cni.go:84] Creating CNI manager for ""
	I0110 02:27:13.554178  348700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:27:13.554206  348700 start.go:353] cluster config:
	{Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:27:13.555539  348700 out.go:179] * Starting "newest-cni-843779" primary control-plane node in "newest-cni-843779" cluster
	I0110 02:27:13.556539  348700 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:27:13.557528  348700 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:27:13.558450  348700 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:27:13.558487  348700 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 02:27:13.558498  348700 cache.go:65] Caching tarball of preloaded images
	I0110 02:27:13.558547  348700 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:27:13.558588  348700 preload.go:251] Found /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 02:27:13.558603  348700 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:27:13.558728  348700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:27:13.577350  348700 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:27:13.577367  348700 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:27:13.577381  348700 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:27:13.577405  348700 start.go:360] acquireMachinesLock for newest-cni-843779: {Name:mk323a284e6d1fbe60648cadd708de40d28e2eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:27:13.577471  348700 start.go:364] duration metric: took 37.71µs to acquireMachinesLock for "newest-cni-843779"
	I0110 02:27:13.577488  348700 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:27:13.577492  348700 fix.go:54] fixHost starting: 
	I0110 02:27:13.577672  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:13.595344  348700 fix.go:112] recreateIfNeeded on newest-cni-843779: state=Stopped err=<nil>
	W0110 02:27:13.595382  348700 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:27:13.596809  348700 out.go:252] * Restarting existing docker container for "newest-cni-843779" ...
	I0110 02:27:13.596858  348700 cli_runner.go:164] Run: docker start newest-cni-843779
	I0110 02:27:13.819844  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:13.838060  348700 kic.go:430] container "newest-cni-843779" state is running.
	I0110 02:27:13.838482  348700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:27:13.856205  348700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/config.json ...
	I0110 02:27:13.856370  348700 machine.go:94] provisionDockerMachine start ...
	I0110 02:27:13.856426  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:13.875477  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:13.875737  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:13.875751  348700 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:27:13.876381  348700 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49606->127.0.0.1:33135: read: connection reset by peer
	I0110 02:27:17.003548  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
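The provisioning above is just remote commands run over the container's forwarded SSH port (127.0.0.1:33135 with the machine's id_rsa key in this run). A minimal sketch of that kind of call, assuming the golang.org/x/crypto/ssh package; this is illustrative, not minikube's own code:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address are taken from this run's log; adjust as needed.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33135", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same command the provisioner runs first: read the remote hostname.
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}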
	I0110 02:27:17.003605  348700 ubuntu.go:182] provisioning hostname "newest-cni-843779"
	I0110 02:27:17.003702  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.021557  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:17.021777  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:17.021791  348700 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-843779 && echo "newest-cni-843779" | sudo tee /etc/hostname
	I0110 02:27:17.152935  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-843779
	
	I0110 02:27:17.153006  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.170223  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:17.170515  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:17.170538  348700 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:27:17.294158  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: 
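The shell snippet above makes sure /etc/hosts maps 127.0.1.1 to the node name before continuing. A simplified Go sketch of the same idempotent check-and-append (the real step also rewrites an existing 127.0.1.1 line, which is omitted here):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry appends "127.0.1.1 <name>" to path unless some line
// already maps an address to <name>. Illustrative only; the log above
// shells out to grep/sed instead.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[1] == name {
			return nil // already present
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-843779"); err != nil {
		log.Fatal(err)
	}
}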
	I0110 02:27:17.294183  348700 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-10552/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-10552/.minikube}
	I0110 02:27:17.294203  348700 ubuntu.go:190] setting up certificates
	I0110 02:27:17.294217  348700 provision.go:84] configureAuth start
	I0110 02:27:17.294261  348700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:27:17.312494  348700 provision.go:143] copyHostCerts
	I0110 02:27:17.312546  348700 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem, removing ...
	I0110 02:27:17.312560  348700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem
	I0110 02:27:17.312627  348700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/cert.pem (1123 bytes)
	I0110 02:27:17.312777  348700 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem, removing ...
	I0110 02:27:17.312788  348700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem
	I0110 02:27:17.312815  348700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/key.pem (1675 bytes)
	I0110 02:27:17.312896  348700 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem, removing ...
	I0110 02:27:17.312905  348700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem
	I0110 02:27:17.312936  348700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-10552/.minikube/ca.pem (1082 bytes)
	I0110 02:27:17.313012  348700 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843779 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-843779]
	I0110 02:27:17.335669  348700 provision.go:177] copyRemoteCerts
	I0110 02:27:17.335719  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:27:17.335762  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.352837  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:17.444471  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:27:17.460985  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:27:17.477001  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:27:17.492804  348700 provision.go:87] duration metric: took 198.56873ms to configureAuth
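configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, localhost, minikube and the node name. A rough Go sketch of producing a certificate with those SANs; it self-signs for brevity, whereas the real flow signs the server cert with the profile's CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-843779"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the log above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-843779"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}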
	I0110 02:27:17.492834  348700 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:27:17.493025  348700 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:17.493133  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.510543  348700 main.go:144] libmachine: Using SSH client type: native
	I0110 02:27:17.510750  348700 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I0110 02:27:17.510768  348700 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:27:17.781191  348700 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:27:17.781214  348700 machine.go:97] duration metric: took 3.924831738s to provisionDockerMachine
	I0110 02:27:17.781224  348700 start.go:293] postStartSetup for "newest-cni-843779" (driver="docker")
	I0110 02:27:17.781234  348700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:27:17.781281  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:27:17.781316  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.799029  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:17.890728  348700 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:27:17.894112  348700 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:27:17.894133  348700 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:27:17.894142  348700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/addons for local assets ...
	I0110 02:27:17.894187  348700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-10552/.minikube/files for local assets ...
	I0110 02:27:17.894254  348700 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem -> 140862.pem in /etc/ssl/certs
	I0110 02:27:17.894345  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:27:17.901469  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:27:17.917829  348700 start.go:296] duration metric: took 136.594255ms for postStartSetup
	I0110 02:27:17.917920  348700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:27:17.917959  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:17.935974  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:18.025528  348700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:27:18.029828  348700 fix.go:56] duration metric: took 4.452329842s for fixHost
	I0110 02:27:18.029853  348700 start.go:83] releasing machines lock for "newest-cni-843779", held for 4.452371078s
	I0110 02:27:18.029928  348700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843779
	I0110 02:27:18.047243  348700 ssh_runner.go:195] Run: cat /version.json
	I0110 02:27:18.047290  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:18.047330  348700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:27:18.047398  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:18.066550  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:18.066893  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:19.469781  348700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.422419408s)
	I0110 02:27:19.469861  348700 ssh_runner.go:235] Completed: cat /version.json: (1.422590338s)
	I0110 02:27:19.470030  348700 ssh_runner.go:195] Run: systemctl --version
	I0110 02:27:19.476476  348700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:27:19.509832  348700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:27:19.514308  348700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:27:19.514362  348700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:27:19.521936  348700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
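The find/mv step above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs stays active. A small Go sketch of the same rename pass, for illustration:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same filter as the find expression: bridge or podman configs only.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
		}
	}
}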
	I0110 02:27:19.521960  348700 start.go:496] detecting cgroup driver to use...
	I0110 02:27:19.521996  348700 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 02:27:19.522045  348700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:27:19.536177  348700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:27:19.547454  348700 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:27:19.547509  348700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:27:19.560409  348700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:27:19.571287  348700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:27:19.647180  348700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:27:19.722648  348700 docker.go:234] disabling docker service ...
	I0110 02:27:19.722708  348700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:27:19.735703  348700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:27:19.746704  348700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:27:19.822317  348700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:27:19.901182  348700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:27:19.912432  348700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:27:19.925394  348700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:27:19.925438  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.933552  348700 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:27:19.933606  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.942063  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.950172  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.958533  348700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:27:19.966227  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.974518  348700 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.982480  348700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:27:19.990530  348700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:27:19.997290  348700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:27:20.004145  348700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:20.077918  348700 ssh_runner.go:195] Run: sudo systemctl restart crio
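The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before cri-o is restarted. A rough Go equivalent of the first of those edits, shown only to make the pattern concrete:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Rough equivalent of:
	//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	//       /etc/crio/crio.conf.d/02-crio.conf
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, updated, 0644); err != nil {
		log.Fatal(err)
	}
}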
	I0110 02:27:20.202996  348700 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:27:20.203049  348700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:27:20.206756  348700 start.go:574] Will wait 60s for crictl version
	I0110 02:27:20.206824  348700 ssh_runner.go:195] Run: which crictl
	I0110 02:27:20.210112  348700 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:27:20.233516  348700 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
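The 60-second waits above check that /var/run/crio/crio.sock exists (via stat) and that crictl answers before kubeadm work begins. One way to express that readiness wait in Go, dialing the socket instead of stat'ing it; a sketch, not the actual implementation:

package main

import (
	"log"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio socket is ready")
}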
	I0110 02:27:20.233585  348700 ssh_runner.go:195] Run: crio --version
	I0110 02:27:20.259284  348700 ssh_runner.go:195] Run: crio --version
	I0110 02:27:20.287176  348700 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:27:20.288379  348700 cli_runner.go:164] Run: docker network inspect newest-cni-843779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:27:20.305317  348700 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:27:20.309249  348700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:27:20.320346  348700 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:27:20.321353  348700 kubeadm.go:884] updating cluster {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:27:20.321523  348700 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:27:20.321586  348700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:27:20.355710  348700 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:27:20.355731  348700 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:27:20.355782  348700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:27:20.380250  348700 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:27:20.380270  348700 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:27:20.380276  348700 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:27:20.380367  348700 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:27:20.380436  348700 ssh_runner.go:195] Run: crio config
	I0110 02:27:20.422479  348700 cni.go:84] Creating CNI manager for ""
	I0110 02:27:20.422500  348700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:27:20.422516  348700 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:27:20.422538  348700 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843779 NodeName:newest-cni-843779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:27:20.423143  348700 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:27:20.423234  348700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:27:20.431905  348700 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:27:20.431961  348700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:27:20.439199  348700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:27:20.450857  348700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:27:20.462208  348700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
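The kubeadm.yaml.new written above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks the documents and prints each kind, assuming gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decode document by document until EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}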
	I0110 02:27:20.473604  348700 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:27:20.476806  348700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:27:20.485751  348700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:20.567596  348700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:20.591080  348700 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779 for IP: 192.168.85.2
	I0110 02:27:20.591104  348700 certs.go:195] generating shared ca certs ...
	I0110 02:27:20.591123  348700 certs.go:227] acquiring lock for ca certs: {Name:mk0b415533cec596b4d3cf91d9814c0f790259aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:20.591247  348700 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key
	I0110 02:27:20.591286  348700 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key
	I0110 02:27:20.591295  348700 certs.go:257] generating profile certs ...
	I0110 02:27:20.591394  348700 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/client.key
	I0110 02:27:20.591456  348700 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key.80ef10c5
	I0110 02:27:20.591495  348700 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key
	I0110 02:27:20.591605  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem (1338 bytes)
	W0110 02:27:20.591636  348700 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086_empty.pem, impossibly tiny 0 bytes
	I0110 02:27:20.591646  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:27:20.591670  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:27:20.591695  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:27:20.591720  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/certs/key.pem (1675 bytes)
	I0110 02:27:20.591761  348700 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem (1708 bytes)
	I0110 02:27:20.592406  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:27:20.610962  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:27:20.629119  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:27:20.648276  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:27:20.669552  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:27:20.687110  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:27:20.702878  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:27:20.718615  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/newest-cni-843779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:27:20.734371  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:27:20.750009  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/certs/14086.pem --> /usr/share/ca-certificates/14086.pem (1338 bytes)
	I0110 02:27:20.765836  348700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/ssl/certs/140862.pem --> /usr/share/ca-certificates/140862.pem (1708 bytes)
	I0110 02:27:20.782375  348700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:27:20.793896  348700 ssh_runner.go:195] Run: openssl version
	I0110 02:27:20.799742  348700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.806576  348700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:27:20.813425  348700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.816847  348700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:53 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.816900  348700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:27:20.850170  348700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:27:20.857485  348700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.864445  348700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14086.pem /etc/ssl/certs/14086.pem
	I0110 02:27:20.871151  348700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.874455  348700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:56 /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.874499  348700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14086.pem
	I0110 02:27:20.907366  348700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:27:20.914005  348700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.920577  348700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/140862.pem /etc/ssl/certs/140862.pem
	I0110 02:27:20.927314  348700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.930578  348700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:56 /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.930617  348700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140862.pem
	I0110 02:27:20.965180  348700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:27:20.973127  348700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:27:20.976785  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:27:21.009953  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:27:21.043044  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:27:21.076437  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:27:21.119666  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:27:21.164341  348700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
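Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check in Go, parsing the PEM and comparing NotAfter; the path below is one of the certs from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}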
	I0110 02:27:21.214751  348700 kubeadm.go:401] StartCluster: {Name:newest-cni-843779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-843779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:27:21.214861  348700 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:27:21.214934  348700 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:27:21.252879  348700 cri.go:96] found id: "470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e"
	I0110 02:27:21.252925  348700 cri.go:96] found id: "76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613"
	I0110 02:27:21.252931  348700 cri.go:96] found id: "8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b"
	I0110 02:27:21.252935  348700 cri.go:96] found id: "598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd"
	I0110 02:27:21.252940  348700 cri.go:96] found id: ""
	I0110 02:27:21.252986  348700 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:27:21.264469  348700 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:27:21Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:27:21.264626  348700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:27:21.272145  348700 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:27:21.272160  348700 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:27:21.272196  348700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:27:21.279020  348700 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:27:21.279416  348700 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-843779" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:21.279537  348700 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-10552/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-843779" cluster setting kubeconfig missing "newest-cni-843779" context setting]
	I0110 02:27:21.279826  348700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:21.281041  348700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:27:21.288090  348700 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 02:27:21.288112  348700 kubeadm.go:602] duration metric: took 15.947367ms to restartPrimaryControlPlane
	I0110 02:27:21.288120  348700 kubeadm.go:403] duration metric: took 73.381828ms to StartCluster
	I0110 02:27:21.288146  348700 settings.go:142] acquiring lock: {Name:mk2a01746ce6538db92ca35d706f43bb78bbaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:21.288199  348700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:27:21.288673  348700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-10552/kubeconfig: {Name:mk8430a4782f139ab83149680c0d79371f7246f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:27:21.288870  348700 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:27:21.289024  348700 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:27:21.289103  348700 config.go:182] Loaded profile config "newest-cni-843779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:27:21.289130  348700 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-843779"
	I0110 02:27:21.289151  348700 addons.go:70] Setting default-storageclass=true in profile "newest-cni-843779"
	I0110 02:27:21.289166  348700 addons.go:70] Setting dashboard=true in profile "newest-cni-843779"
	I0110 02:27:21.289194  348700 addons.go:239] Setting addon dashboard=true in "newest-cni-843779"
	W0110 02:27:21.289206  348700 addons.go:248] addon dashboard should already be in state true
	I0110 02:27:21.289234  348700 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:21.289180  348700 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843779"
	I0110 02:27:21.289154  348700 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-843779"
	W0110 02:27:21.289286  348700 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:27:21.289306  348700 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:21.289560  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.289725  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.289728  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.291507  348700 out.go:179] * Verifying Kubernetes components...
	I0110 02:27:21.292720  348700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:27:21.313343  348700 addons.go:239] Setting addon default-storageclass=true in "newest-cni-843779"
	W0110 02:27:21.313366  348700 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:27:21.313392  348700 host.go:66] Checking if "newest-cni-843779" exists ...
	I0110 02:27:21.313757  348700 cli_runner.go:164] Run: docker container inspect newest-cni-843779 --format={{.State.Status}}
	I0110 02:27:21.314233  348700 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:27:21.315153  348700 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:27:21.315211  348700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:21.315227  348700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:27:21.315284  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:21.317164  348700 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:27:21.318115  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:27:21.318130  348700 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:27:21.318180  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:21.341783  348700 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:21.341804  348700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:27:21.341861  348700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843779
	I0110 02:27:21.342993  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:21.352476  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:21.368166  348700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/newest-cni-843779/id_rsa Username:docker}
	I0110 02:27:21.442386  348700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:27:21.454043  348700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:27:21.456758  348700 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:27:21.456813  348700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:27:21.460499  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:27:21.460512  348700 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:27:21.473816  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:27:21.473834  348700 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:27:21.475779  348700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:27:21.487704  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:27:21.487722  348700 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:27:21.500734  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:27:21.500756  348700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:27:21.513691  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:27:21.513712  348700 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:27:21.526659  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:27:21.526686  348700 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:27:21.539165  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:27:21.539190  348700 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:27:21.551121  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:27:21.551137  348700 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:27:21.563183  348700 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:27:21.563202  348700 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:27:21.574937  348700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:27:23.179076  348700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.725000195s)
	I0110 02:27:23.179147  348700 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.722310802s)
	I0110 02:27:23.179187  348700 api_server.go:72] duration metric: took 1.8902513s to wait for apiserver process to appear ...
	I0110 02:27:23.179197  348700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.703390497s)
	I0110 02:27:23.179200  348700 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:27:23.179328  348700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.6043615s)
	I0110 02:27:23.179330  348700 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:23.180734  348700 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-843779 addons enable metrics-server
	
	I0110 02:27:23.186643  348700 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:27:23.186666  348700 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:27:23.192112  348700 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:27:23.193113  348700 addons.go:530] duration metric: took 1.90409785s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:27:23.679812  348700 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:23.684847  348700 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:27:23.684872  348700 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:27:24.179482  348700 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 02:27:24.183404  348700 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 02:27:24.184315  348700 api_server.go:141] control plane version: v1.35.0
	I0110 02:27:24.184336  348700 api_server.go:131] duration metric: took 1.005018895s to wait for apiserver health ...
	I0110 02:27:24.184344  348700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:27:24.187571  348700 system_pods.go:59] 8 kube-system pods found
	I0110 02:27:24.187603  348700 system_pods.go:61] "coredns-7d764666f9-zmtqf" [bab0ce6c-6845-4a76-aba8-25902122e535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:24.187616  348700 system_pods.go:61] "etcd-newest-cni-843779" [fdd4d85a-8248-4455-82c1-256311f58e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:27:24.187628  348700 system_pods.go:61] "kindnet-p5kwz" [a4006850-95c0-4567-9f85-7914adcf599d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:27:24.187642  348700 system_pods.go:61] "kube-apiserver-newest-cni-843779" [6c2775ff-47fa-4806-9434-1cf525435963] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:27:24.187652  348700 system_pods.go:61] "kube-controller-manager-newest-cni-843779" [3d61a2c1-6564-4d15-9c8b-1eaefd4c6878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:27:24.187665  348700 system_pods.go:61] "kube-proxy-9djhz" [d97a2a8a-cfa4-414f-ad6d-47af95479498] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:27:24.187676  348700 system_pods.go:61] "kube-scheduler-newest-cni-843779" [b6848e97-9fd5-4a56-b28d-0f581cc698b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:27:24.187683  348700 system_pods.go:61] "storage-provisioner" [4f1dd65f-c7de-48ab-8d72-fcc925bbd6be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:27:24.187701  348700 system_pods.go:74] duration metric: took 3.351677ms to wait for pod list to return data ...
	I0110 02:27:24.187710  348700 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:27:24.189782  348700 default_sa.go:45] found service account: "default"
	I0110 02:27:24.189797  348700 default_sa.go:55] duration metric: took 2.080785ms for default service account to be created ...
	I0110 02:27:24.189806  348700 kubeadm.go:587] duration metric: took 2.900872401s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:27:24.189823  348700 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:27:24.191534  348700 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 02:27:24.191559  348700 node_conditions.go:123] node cpu capacity is 8
	I0110 02:27:24.191575  348700 node_conditions.go:105] duration metric: took 1.746819ms to run NodePressure ...
	I0110 02:27:24.191591  348700 start.go:242] waiting for startup goroutines ...
	I0110 02:27:24.191600  348700 start.go:247] waiting for cluster config update ...
	I0110 02:27:24.191614  348700 start.go:256] writing updated cluster config ...
	I0110 02:27:24.191933  348700 ssh_runner.go:195] Run: rm -f paused
	I0110 02:27:24.242149  348700 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 02:27:24.244130  348700 out.go:179] * Done! kubectl is now configured to use "newest-cni-843779" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.961546829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.964694623Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9252035b-7d3c-4db6-8b6d-5f9549d11eea name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.965427953Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=db76147f-b4b8-4051-9e4b-6f9b329fceb8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.966328034Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.966827913Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.967308521Z" level=info msg="Ran pod sandbox 3a6744aebdfd2e40860fdb6bca2c297b2e560bba89b94ecb7144eeae936467ee with infra container: kube-system/kindnet-p5kwz/POD" id=9252035b-7d3c-4db6-8b6d-5f9549d11eea name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.967492287Z" level=info msg="Ran pod sandbox 096c20ff3c59cebacc4a3b73cbc529c8475552518e07e212e20f1851d3c34657 with infra container: kube-system/kube-proxy-9djhz/POD" id=db76147f-b4b8-4051-9e4b-6f9b329fceb8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.96848847Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a9d41207-7cc3-4a18-9aaf-9e184e69b415 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.968490926Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=2d516e0b-8e8f-4367-880d-2222f7e8dad9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.969525919Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=139ae996-f29f-499e-a9fd-72d42309eed2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.969527629Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=5b533515-5ad9-42a7-8686-f38e888e4c91 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970689436Z" level=info msg="Creating container: kube-system/kube-proxy-9djhz/kube-proxy" id=041ca2b1-c16f-4f57-bc3c-010264a5cee8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970820251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970826251Z" level=info msg="Creating container: kube-system/kindnet-p5kwz/kindnet-cni" id=0bf2ddc7-c6b5-41c2-912a-00a601e0d31c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.970989244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.975499147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.976090412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.976248457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:23 newest-cni-843779 crio[524]: time="2026-01-10T02:27:23.976807881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.004161828Z" level=info msg="Created container 312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387: kube-system/kindnet-p5kwz/kindnet-cni" id=0bf2ddc7-c6b5-41c2-912a-00a601e0d31c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.004592561Z" level=info msg="Starting container: 312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387" id=460a8110-589d-47e6-8504-c50598e57621 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.00618194Z" level=info msg="Started container" PID=1054 containerID=312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387 description=kube-system/kindnet-p5kwz/kindnet-cni id=460a8110-589d-47e6-8504-c50598e57621 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a6744aebdfd2e40860fdb6bca2c297b2e560bba89b94ecb7144eeae936467ee
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.006779531Z" level=info msg="Created container 9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2: kube-system/kube-proxy-9djhz/kube-proxy" id=041ca2b1-c16f-4f57-bc3c-010264a5cee8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.007204103Z" level=info msg="Starting container: 9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2" id=0dcf07cd-b7b4-4c0c-93cb-d99a714a8082 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:27:24 newest-cni-843779 crio[524]: time="2026-01-10T02:27:24.009644991Z" level=info msg="Started container" PID=1055 containerID=9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2 description=kube-system/kube-proxy-9djhz/kube-proxy id=0dcf07cd-b7b4-4c0c-93cb-d99a714a8082 name=/runtime.v1.RuntimeService/StartContainer sandboxID=096c20ff3c59cebacc4a3b73cbc529c8475552518e07e212e20f1851d3c34657
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	312547cdc8bbb       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   3a6744aebdfd2       kindnet-p5kwz                               kube-system
	9e93e53fa02b3       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   5 seconds ago       Running             kube-proxy                1                   096c20ff3c59c       kube-proxy-9djhz                            kube-system
	470a11b3e6288       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   8 seconds ago       Running             kube-scheduler            1                   a95daa4a638e8       kube-scheduler-newest-cni-843779            kube-system
	76f4729b16b0d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   8 seconds ago       Running             kube-controller-manager   1                   15cab2dee57a3       kube-controller-manager-newest-cni-843779   kube-system
	8337df843b206       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   8 seconds ago       Running             kube-apiserver            1                   442ed3779581c       kube-apiserver-newest-cni-843779            kube-system
	598e38003a29a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   9361fb3c4a977       etcd-newest-cni-843779                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-843779
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-843779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=newest-cni-843779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_26_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:26:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-843779
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:27:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 02:27:22 +0000   Sat, 10 Jan 2026 02:26:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-843779
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                d81a3470-fe6c-4f6c-853e-984980245e0f
	  Boot ID:                    4e1c4f14-232f-4f69-b522-cd3c3c918c1c
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-843779                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-p5kwz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-843779             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-843779    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-9djhz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-843779             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-843779 event: Registered Node newest-cni-843779 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-843779 event: Registered Node newest-cni-843779 in Controller
	
	
	==> dmesg <==
	[  +5.251360] kauditd_printk_skb: 47 callbacks suppressed
	[Jan10 02:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[Jan10 02:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b5 04 99 4d 55 08 06
	[  +0.000555] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 45 fe 54 19 e8 08 06
	[  +6.807824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[ +38.135886] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[  +0.723513] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	[  +7.502256] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e bc 6a 16 01 6a 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a d6 5f 3b e3 a1 08 06
	[Jan10 02:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 60 ce d9 b9 94 08 06
	[  +0.000448] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 3d 48 8c 2d ec 08 06
	[ +34.501004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea c0 ff f2 f1 29 08 06
	[  +0.000400] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 3c 10 19 81 47 08 06
	
	
	==> etcd [598e38003a29ae61f6432b2965e523888a3cd93289a7772ba60d488c231739dd] <==
	{"level":"info","ts":"2026-01-10T02:27:21.237944Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:27:21.238628Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:27:21.238635Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:27:21.238780Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:27:21.238938Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:27:21.238602Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:27:21.239469Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:27:21.830156Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:27:21.830204Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:27:21.830297Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:27:21.830318Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:27:21.830355Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.831000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.831036Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:27:21.831055Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.831062Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:27:21.832043Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-843779 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:27:21.832043Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:27:21.832065Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:27:21.832335Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:27:21.832370Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:27:21.833727Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:27:21.833792Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:27:21.836406Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:27:21.836500Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 02:27:29 up  1:09,  0 user,  load average: 3.28, 3.40, 2.40
	Linux newest-cni-843779 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [312547cdc8bbb7811eea31159c0e75bf75a8e32c10a2bbfe0698feebbdea6387] <==
	I0110 02:27:24.309115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:27:24.309505       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:27:24.309662       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:27:24.309689       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:27:24.309719       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:27:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:27:24.506588       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:27:24.506621       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:27:24.506656       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:27:24.507787       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:27:24.806854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:27:24.806909       1 metrics.go:72] Registering metrics
	I0110 02:27:24.806977       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8337df843b206d2a93e6ebac0e91e1a58ec130dd0b50f6aaeea1220cb9f6449b] <==
	I0110 02:27:22.741644       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:27:22.741103       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.741716       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:27:22.741743       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:27:22.741784       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:27:22.741814       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.741789       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.741763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:27:22.745451       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:27:22.741089       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:27:22.741130       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:27:22.752642       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:27:22.768741       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:27:22.776999       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:27:22.989593       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:27:23.013391       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:27:23.028343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:27:23.034787       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:27:23.039972       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:27:23.067119       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.143.22"}
	I0110 02:27:23.075671       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.73.102"}
	I0110 02:27:23.643745       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:27:26.338520       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:27:26.389671       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:27:26.489440       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [76f4729b16b0dabcb7fc6d98e95915aa779a343d0b22a534dbeb7682cfad0613] <==
	I0110 02:27:25.891923       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892416       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892618       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892914       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892928       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893324       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893447       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893663       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893754       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893819       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893916       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894002       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894151       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894287       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.894601       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.893792       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.895296       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.895599       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.892918       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.900636       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.901532       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:25.993024       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:25.993049       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:27:25.993056       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:27:26.001683       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9e93e53fa02b3f0e8e4b866b1a9c264ba142318d779c61fd572cfe073bb532a2] <==
	I0110 02:27:24.041966       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:27:24.100392       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:24.200758       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:24.200790       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:27:24.200903       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:27:24.220698       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:27:24.220752       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:27:24.225769       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:27:24.226117       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:27:24.226156       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:27:24.227343       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:27:24.227374       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:27:24.227426       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:27:24.227437       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:27:24.227426       1 config.go:200] "Starting service config controller"
	I0110 02:27:24.227451       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:27:24.227675       1 config.go:309] "Starting node config controller"
	I0110 02:27:24.227691       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:27:24.227699       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:27:24.327593       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:27:24.327593       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:27:24.327595       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [470a11b3e628822295b4651043fb6903544751366fe277f30a0931ce7748b97e] <==
	I0110 02:27:21.418424       1 serving.go:386] Generated self-signed cert in-memory
	I0110 02:27:22.731491       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:27:22.731522       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:27:22.736378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:27:22.736386       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0110 02:27:22.736441       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:22.736426       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0110 02:27:22.736452       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:22.736461       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:27:22.736546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:27:22.736929       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:27:22.837088       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.837129       1 shared_informer.go:377] "Caches are synced"
	I0110 02:27:22.837301       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:27:22 newest-cni-843779 kubelet[677]: E0110 02:27:22.810459     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.652540     677 apiserver.go:52] "Watching apiserver"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.656691     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-843779" containerName="kube-controller-manager"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.658146     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679085     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-xtables-lock\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679140     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-cni-cfg\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679174     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4006850-95c0-4567-9f85-7914adcf599d-lib-modules\") pod \"kindnet-p5kwz\" (UID: \"a4006850-95c0-4567-9f85-7914adcf599d\") " pod="kube-system/kindnet-p5kwz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679193     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d97a2a8a-cfa4-414f-ad6d-47af95479498-xtables-lock\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.679276     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d97a2a8a-cfa4-414f-ad6d-47af95479498-lib-modules\") pod \"kube-proxy-9djhz\" (UID: \"d97a2a8a-cfa4-414f-ad6d-47af95479498\") " pod="kube-system/kube-proxy-9djhz"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.689432     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.689580     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: I0110 02:27:23.689723     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.695467     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-843779\" already exists" pod="kube-system/etcd-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.695558     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696173     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-843779\" already exists" pod="kube-system/kube-apiserver-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696181     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-843779\" already exists" pod="kube-system/kube-scheduler-newest-cni-843779"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696260     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:23 newest-cni-843779 kubelet[677]: E0110 02:27:23.696303     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-843779" containerName="kube-scheduler"
	Jan 10 02:27:24 newest-cni-843779 kubelet[677]: E0110 02:27:24.695145     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-843779" containerName="kube-scheduler"
	Jan 10 02:27:24 newest-cni-843779 kubelet[677]: E0110 02:27:24.695293     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-843779" containerName="etcd"
	Jan 10 02:27:24 newest-cni-843779 kubelet[677]: E0110 02:27:24.695631     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-843779" containerName="kube-apiserver"
	Jan 10 02:27:25 newest-cni-843779 kubelet[677]: I0110 02:27:25.167728     677 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:27:25 newest-cni-843779 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:27:25 newest-cni-843779 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:27:25 newest-cni-843779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843779 -n newest-cni-843779
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843779 -n newest-cni-843779: exit status 2 (319.196269ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-843779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58
E0110 02:27:30.236263   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/kindnet-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58: exit status 1 (56.728157ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-zmtqf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-rzhh8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6kp58" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-843779 describe pod coredns-7d764666f9-zmtqf storage-provisioner dashboard-metrics-scraper-867fb5f87b-rzhh8 kubernetes-dashboard-b84665fb8-6kp58: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.46s)

                                                
                                    

Test pass (279/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.05
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.66
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.79
22 TestOffline 58.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 90.89
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 8.43
48 TestAddons/StoppedEnableDisable 18.52
49 TestCertOptions 25.09
50 TestCertExpiration 209.91
52 TestForceSystemdFlag 25.89
53 TestForceSystemdEnv 24.28
58 TestErrorSpam/setup 15.76
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.94
61 TestErrorSpam/pause 6.84
62 TestErrorSpam/unpause 5.5
63 TestErrorSpam/stop 2.6
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 36.24
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.85
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.26
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 47.37
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.15
86 TestFunctional/serial/LogsFileCmd 1.15
87 TestFunctional/serial/InvalidService 4.1
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 23.65
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.98
97 TestFunctional/parallel/ServiceCmdConnect 15.82
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 21.23
101 TestFunctional/parallel/SSHCmd 0.52
102 TestFunctional/parallel/CpCmd 1.69
103 TestFunctional/parallel/MySQL 25.35
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.63
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.46
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.17
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 0.49
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.31
122 TestFunctional/parallel/ImageCommands/Setup 0.85
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.2
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
136 TestFunctional/parallel/ServiceCmd/List 0.33
137 TestFunctional/parallel/ProfileCmd/profile_list 0.41
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
141 TestFunctional/parallel/MountCmd/any-port 5.5
142 TestFunctional/parallel/ServiceCmd/Format 0.35
143 TestFunctional/parallel/ServiceCmd/URL 0.35
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
153 TestFunctional/parallel/MountCmd/specific-port 1.97
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.17
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 104.34
163 TestMultiControlPlane/serial/DeployApp 3.71
164 TestMultiControlPlane/serial/PingHostFromPods 0.97
165 TestMultiControlPlane/serial/AddWorkerNode 26.93
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
168 TestMultiControlPlane/serial/CopyFile 16.31
169 TestMultiControlPlane/serial/StopSecondaryNode 13.25
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.3
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.1
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.08
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
176 TestMultiControlPlane/serial/StopCluster 49
177 TestMultiControlPlane/serial/RestartCluster 54.66
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
179 TestMultiControlPlane/serial/AddSecondaryNode 32.32
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 38.94
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.97
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 23.05
211 TestKicCustomNetwork/use_default_bridge_network 22.56
212 TestKicExistingNetwork 20.08
213 TestKicCustomSubnet 23.53
214 TestKicStaticIP 23.14
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 38.94
219 TestMountStart/serial/StartWithMountFirst 4.6
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.56
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.65
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.28
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 62.79
231 TestMultiNode/serial/DeployApp2Nodes 2.96
232 TestMultiNode/serial/PingHostFrom2Pods 0.67
233 TestMultiNode/serial/AddNode 23.24
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.63
236 TestMultiNode/serial/CopyFile 9.26
237 TestMultiNode/serial/StopNode 2.2
238 TestMultiNode/serial/StartAfterStop 7
239 TestMultiNode/serial/RestartKeepsNodes 57.16
240 TestMultiNode/serial/DeleteNode 4.97
241 TestMultiNode/serial/StopMultiNode 17.6
242 TestMultiNode/serial/RestartMultiNode 45.2
243 TestMultiNode/serial/ValidateNameConflict 23.06
250 TestScheduledStopUnix 93.1
253 TestInsufficientStorage 8.55
254 TestRunningBinaryUpgrade 321.59
256 TestKubernetesUpgrade 310.79
257 TestMissingContainerUpgrade 77.09
258 TestPreload/Start-NoPreload-PullImage 65.99
259 TestStoppedBinaryUpgrade/Setup 0.79
260 TestStoppedBinaryUpgrade/Upgrade 64.71
262 TestPause/serial/Start 41.34
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
264 TestPreload/Restart-With-Preload-Check-User-Image 47.78
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
267 TestNoKubernetes/serial/StartWithK8s 18.85
268 TestNoKubernetes/serial/StartWithStopK8s 23.32
269 TestPause/serial/SecondStartNoReconfiguration 5.66
271 TestNoKubernetes/serial/Start 8.97
280 TestNetworkPlugins/group/false 4.74
281 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
283 TestNoKubernetes/serial/ProfileList 1.38
284 TestNoKubernetes/serial/Stop 1.25
288 TestNoKubernetes/serial/StartNoArgs 7.96
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
297 TestPreload/PreloadSrc/gcs 4.22
298 TestPreload/PreloadSrc/github 8
299 TestPreload/PreloadSrc/gcs-cached 0.41
300 TestNetworkPlugins/group/auto/Start 36.61
301 TestNetworkPlugins/group/auto/KubeletFlags 0.29
302 TestNetworkPlugins/group/auto/NetCatPod 8.21
303 TestNetworkPlugins/group/auto/DNS 0.1
304 TestNetworkPlugins/group/auto/Localhost 0.08
305 TestNetworkPlugins/group/auto/HairPin 0.08
306 TestNetworkPlugins/group/kindnet/Start 39.58
307 TestNetworkPlugins/group/calico/Start 46.74
308 TestNetworkPlugins/group/custom-flannel/Start 43.54
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
311 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
312 TestNetworkPlugins/group/kindnet/DNS 0.12
313 TestNetworkPlugins/group/kindnet/Localhost 0.09
314 TestNetworkPlugins/group/kindnet/HairPin 0.08
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.29
317 TestNetworkPlugins/group/calico/NetCatPod 8.21
318 TestNetworkPlugins/group/enable-default-cni/Start 59.2
319 TestNetworkPlugins/group/calico/DNS 0.12
320 TestNetworkPlugins/group/calico/Localhost 0.11
321 TestNetworkPlugins/group/calico/HairPin 0.1
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
324 TestNetworkPlugins/group/custom-flannel/DNS 0.14
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
327 TestNetworkPlugins/group/flannel/Start 41.86
328 TestNetworkPlugins/group/bridge/Start 65.41
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
331 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
336 TestNetworkPlugins/group/flannel/NetCatPod 8.16
337 TestNetworkPlugins/group/flannel/DNS 0.12
338 TestNetworkPlugins/group/flannel/Localhost 0.1
339 TestNetworkPlugins/group/flannel/HairPin 0.1
341 TestStartStop/group/old-k8s-version/serial/FirstStart 50.96
343 TestStartStop/group/no-preload/serial/FirstStart 50.43
345 TestStartStop/group/embed-certs/serial/FirstStart 41.89
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
347 TestNetworkPlugins/group/bridge/NetCatPod 11.24
348 TestNetworkPlugins/group/bridge/DNS 0.11
349 TestNetworkPlugins/group/bridge/Localhost 0.11
350 TestNetworkPlugins/group/bridge/HairPin 0.1
352 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.67
353 TestStartStop/group/old-k8s-version/serial/DeployApp 7.42
354 TestStartStop/group/embed-certs/serial/DeployApp 8.24
355 TestStartStop/group/no-preload/serial/DeployApp 8.23
357 TestStartStop/group/old-k8s-version/serial/Stop 16.07
359 TestStartStop/group/embed-certs/serial/Stop 18.15
361 TestStartStop/group/no-preload/serial/Stop 18.19
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
363 TestStartStop/group/old-k8s-version/serial/SecondStart 48.21
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/embed-certs/serial/SecondStart 48.76
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
367 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
368 TestStartStop/group/no-preload/serial/SecondStart 50.15
370 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.75
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 43.51
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
376 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
379 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/newest-cni/serial/FirstStart 24.77
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
387 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
391 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/Stop 7.96
394 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
395 TestStartStop/group/newest-cni/serial/SecondStart 11.22
396 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
TestDownloadOnly/v1.28.0/json-events (5.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-113425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-113425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.049254236s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.05s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 01:53:17.829844   14086 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0110 01:53:17.829947   14086 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-113425
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-113425: exit status 85 (66.418901ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-113425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-113425 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:12.832608   14098 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:12.832838   14098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:12.832850   14098 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:12.832855   14098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:12.833049   14098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	W0110 01:53:12.833167   14098 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22414-10552/.minikube/config/config.json: open /home/jenkins/minikube-integration/22414-10552/.minikube/config/config.json: no such file or directory
	I0110 01:53:12.833640   14098 out.go:368] Setting JSON to true
	I0110 01:53:12.834522   14098 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2142,"bootTime":1768007851,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 01:53:12.834575   14098 start.go:143] virtualization: kvm guest
	I0110 01:53:12.839085   14098 out.go:99] [download-only-113425] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0110 01:53:12.839277   14098 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 01:53:12.839282   14098 notify.go:221] Checking for updates...
	I0110 01:53:12.840350   14098 out.go:171] MINIKUBE_LOCATION=22414
	I0110 01:53:12.841578   14098 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:12.842790   14098 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 01:53:12.843746   14098 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 01:53:12.845199   14098 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0110 01:53:12.847258   14098 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 01:53:12.847444   14098 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:12.870213   14098 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 01:53:12.870276   14098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:13.090141   14098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2026-01-10 01:53:13.07836746 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:53:13.090238   14098 docker.go:319] overlay module found
	I0110 01:53:13.091483   14098 out.go:99] Using the docker driver based on user configuration
	I0110 01:53:13.091528   14098 start.go:309] selected driver: docker
	I0110 01:53:13.091537   14098 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:13.091608   14098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:13.145507   14098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2026-01-10 01:53:13.136755346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:53:13.145698   14098 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:13.146347   14098 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 01:53:13.146679   14098 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 01:53:13.148373   14098 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-113425 host does not exist
	  To start a cluster, run: "minikube start -p download-only-113425"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-113425
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-756817 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-756817 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.663042651s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.66s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 01:53:21.910121   14086 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I0110 01:53:21.910160   14086 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-756817
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-756817: exit status 85 (67.018956ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-113425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-113425 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-113425                                                                                                                                                   │ download-only-113425 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ -o=json --download-only -p download-only-756817 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-756817 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:18.294030   14457 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:18.294274   14457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:18.294284   14457 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:18.294288   14457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:18.294473   14457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:53:18.294924   14457 out.go:368] Setting JSON to true
	I0110 01:53:18.295637   14457 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2147,"bootTime":1768007851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 01:53:18.295685   14457 start.go:143] virtualization: kvm guest
	I0110 01:53:18.297193   14457 out.go:99] [download-only-756817] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 01:53:18.297368   14457 notify.go:221] Checking for updates...
	I0110 01:53:18.298455   14457 out.go:171] MINIKUBE_LOCATION=22414
	I0110 01:53:18.299621   14457 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:18.300701   14457 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 01:53:18.301721   14457 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 01:53:18.302636   14457 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0110 01:53:18.304419   14457 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 01:53:18.304615   14457 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:18.328953   14457 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 01:53:18.329043   14457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:18.379432   14457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2026-01-10 01:53:18.370531941 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:53:18.379524   14457 docker.go:319] overlay module found
	I0110 01:53:18.380874   14457 out.go:99] Using the docker driver based on user configuration
	I0110 01:53:18.380912   14457 start.go:309] selected driver: docker
	I0110 01:53:18.380920   14457 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:18.381013   14457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:18.434486   14457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2026-01-10 01:53:18.425341502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:53:18.434631   14457 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:18.435127   14457 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 01:53:18.435255   14457 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 01:53:18.436975   14457 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-756817 host does not exist
	  To start a cluster, run: "minikube start -p download-only-756817"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-756817
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-310469 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-310469" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-310469
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I0110 01:53:22.968332   14086 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-864544 --alsologtostderr --binary-mirror http://127.0.0.1:41199 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-864544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-864544
--- PASS: TestBinaryMirror (0.79s)

                                                
                                    
TestOffline (58.86s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-092866 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-092866 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (56.411467664s)
helpers_test.go:176: Cleaning up "offline-crio-092866" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-092866
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-092866: (2.451961693s)
--- PASS: TestOffline (58.86s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-600454
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-600454: exit status 85 (61.778277ms)

                                                
                                                
-- stdout --
	* Profile "addons-600454" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-600454"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-600454
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-600454: exit status 85 (60.643668ms)

                                                
                                                
-- stdout --
	* Profile "addons-600454" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-600454"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (90.89s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-600454 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-600454 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m30.887899703s)
--- PASS: TestAddons/Setup (90.89s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-600454 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-600454 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.43s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-600454 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-600454 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [836d4afe-3866-4809-a263-0952a1995284] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [836d4afe-3866-4809-a263-0952a1995284] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002993339s
addons_test.go:696: (dbg) Run:  kubectl --context addons-600454 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-600454 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-600454 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.43s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.52s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-600454
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-600454: (18.247279691s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-600454
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-600454
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-600454
--- PASS: TestAddons/StoppedEnableDisable (18.52s)

TestCertOptions (25.09s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-553703 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-553703 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.032303069s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-553703 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-553703 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-553703 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-553703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-553703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-553703: (2.414238831s)
--- PASS: TestCertOptions (25.09s)

TestCertExpiration (209.91s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-738098 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-738098 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.926028662s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-738098 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-738098 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.476765134s)
helpers_test.go:176: Cleaning up "cert-expiration-738098" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-738098
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-738098: (2.501281199s)
--- PASS: TestCertExpiration (209.91s)

TestForceSystemdFlag (25.89s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-825404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0110 02:18:32.101066   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-825404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.043284303s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-825404 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-825404" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-825404
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-825404: (2.5459082s)
--- PASS: TestForceSystemdFlag (25.89s)

TestForceSystemdEnv (24.28s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-135135 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-135135 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.860648447s)
helpers_test.go:176: Cleaning up "force-systemd-env-135135" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-135135
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-135135: (2.419496609s)
--- PASS: TestForceSystemdEnv (24.28s)

TestErrorSpam/setup (15.76s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-933605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-933605 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-933605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-933605 --driver=docker  --container-runtime=crio: (15.755028432s)
--- PASS: TestErrorSpam/setup (15.76s)

TestErrorSpam/start (0.62s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.94s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (6.84s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause: exit status 80 (2.118442297s)
-- stdout --
	* Pausing node nospam-933605 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause: exit status 80 (2.372683003s)
-- stdout --
	* Pausing node nospam-933605 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause: exit status 80 (2.351883312s)
-- stdout --
	* Pausing node nospam-933605 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.84s)

TestErrorSpam/unpause (5.5s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause: exit status 80 (2.129003141s)
-- stdout --
	* Unpausing node nospam-933605 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause: exit status 80 (1.57766761s)
-- stdout --
	* Unpausing node nospam-933605 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause: exit status 80 (1.793043733s)
-- stdout --
	* Unpausing node nospam-933605 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.50s)

TestErrorSpam/stop (2.6s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 stop: (2.403559078s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933605 --log_dir /tmp/nospam-933605 stop
--- PASS: TestErrorSpam/stop (2.60s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22414-10552/.minikube/files/etc/test/nested/copy/14086/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.24s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224091 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-224091 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.23499495s)
--- PASS: TestFunctional/serial/StartWithProxy (36.24s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.85s)
=== RUN   TestFunctional/serial/SoftStart
I0110 01:57:26.140082   14086 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224091 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-224091 --alsologtostderr -v=8: (5.848261419s)
functional_test.go:678: soft start took 5.848979137s for "functional-224091" cluster.
I0110 01:57:31.988794   14086 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (5.85s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-224091 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.48s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-224091 /tmp/TestFunctionalserialCacheCmdcacheadd_local3405902961/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cache add minikube-local-cache-test:functional-224091
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cache delete minikube-local-cache-test:functional-224091
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-224091
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.683073ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 kubectl -- --context functional-224091 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-224091 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (47.37s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224091 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-224091 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.372499699s)
functional_test.go:776: restart took 47.3726524s for "functional-224091" cluster.
I0110 01:58:25.475789   14086 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (47.37s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-224091 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.15s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-224091 logs: (1.147123529s)
--- PASS: TestFunctional/serial/LogsCmd (1.15s)

TestFunctional/serial/LogsFileCmd (1.15s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 logs --file /tmp/TestFunctionalserialLogsFileCmd2173099182/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-224091 logs --file /tmp/TestFunctionalserialLogsFileCmd2173099182/001/logs.txt: (1.147671384s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (4.1s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-224091 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-224091
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-224091: exit status 115 (330.535184ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31133 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-224091 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 config get cpus: exit status 14 (72.787247ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 config get cpus: exit status 14 (63.828026ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (23.65s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-224091 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-224091 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 49959: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.65s)

TestFunctional/parallel/DryRun (0.39s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-224091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (174.557671ms)
-- stdout --
	* [functional-224091] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0110 01:58:43.531513   49286 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:58:43.531629   49286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:58:43.531639   49286 out.go:374] Setting ErrFile to fd 2...
	I0110 01:58:43.531644   49286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:58:43.531936   49286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:58:43.532494   49286 out.go:368] Setting JSON to false
	I0110 01:58:43.533752   49286 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2472,"bootTime":1768007851,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 01:58:43.533822   49286 start.go:143] virtualization: kvm guest
	I0110 01:58:43.536572   49286 out.go:179] * [functional-224091] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 01:58:43.537740   49286 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 01:58:43.537756   49286 notify.go:221] Checking for updates...
	I0110 01:58:43.540039   49286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:58:43.541174   49286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 01:58:43.542562   49286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 01:58:43.543692   49286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 01:58:43.544979   49286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 01:58:43.546736   49286 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:58:43.547337   49286 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:58:43.585484   49286 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 01:58:43.585619   49286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:58:43.639664   49286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2026-01-10 01:58:43.630471519 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:58:43.639838   49286 docker.go:319] overlay module found
	I0110 01:58:43.641590   49286 out.go:179] * Using the docker driver based on existing profile
	I0110 01:58:43.642670   49286 start.go:309] selected driver: docker
	I0110 01:58:43.642681   49286 start.go:928] validating driver "docker" against &{Name:functional-224091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-224091 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:58:43.642746   49286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 01:58:43.644326   49286 out.go:203] 
	W0110 01:58:43.645417   49286 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 01:58:43.646434   49286 out.go:203] 
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224091 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-224091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.45358ms)
-- stdout --
	* [functional-224091] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0110 01:58:43.919206   49523 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:58:43.919311   49523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:58:43.919317   49523 out.go:374] Setting ErrFile to fd 2...
	I0110 01:58:43.919323   49523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:58:43.919573   49523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 01:58:43.919994   49523 out.go:368] Setting JSON to false
	I0110 01:58:43.920844   49523 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2473,"bootTime":1768007851,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 01:58:43.920922   49523 start.go:143] virtualization: kvm guest
	I0110 01:58:43.922593   49523 out.go:179] * [functional-224091] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0110 01:58:43.924085   49523 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 01:58:43.924079   49523 notify.go:221] Checking for updates...
	I0110 01:58:43.925356   49523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:58:43.926480   49523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 01:58:43.927632   49523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 01:58:43.928701   49523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 01:58:43.929749   49523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 01:58:43.931361   49523 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:58:43.932098   49523 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:58:43.955711   49523 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 01:58:43.955811   49523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:58:44.015741   49523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2026-01-10 01:58:44.004602127 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 01:58:44.015869   49523 docker.go:319] overlay module found
	I0110 01:58:44.017365   49523 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 01:58:44.018351   49523 start.go:309] selected driver: docker
	I0110 01:58:44.018363   49523 start.go:928] validating driver "docker" against &{Name:functional-224091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-224091 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:58:44.018450   49523 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 01:58:44.019941   49523 out.go:203] 
	W0110 01:58:44.020844   49523 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 01:58:44.021819   49523 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
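Note: the -f argument above is a Go text/template evaluated against minikube's status output. A minimal sketch of how that format string behaves, using a stand-in Status struct whose field names are taken from the template itself (not from minikube's source):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders status from;
// only the fields referenced by the -f template above are modeled here.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same format string as the test's -f argument (including its "kublet" label).
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}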

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (15.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-224091 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-224091 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-75njp" [23b32c9a-8f99-456b-a23e-17094203b7dd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-75njp" [23b32c9a-8f99-456b-a23e-17094203b7dd] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.003967914s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31965
functional_test.go:1685: http://192.168.49.2:31965: success! body:
Request served by hello-node-connect-5d95464fd4-75njp

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31965
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.82s)
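Note: the success check above is just an HTTP GET against the NodePort URL printed by `minikube service hello-node-connect --url`. A minimal sketch of that probe, hard-coding the URL seen in this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// URL as reported by `minikube service hello-node-connect --url`;
	// the address below is simply the one observed in this run.
	url := "http://192.168.49.2:31965"
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// echo-server replies with the serving pod name and the request it saw.
	fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
}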

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (21.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [ba5768ed-1b45-4b30-afe3-6382f6969666] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003035259s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-224091 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-224091 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-224091 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-224091 apply -f testdata/storage-provisioner/pod.yaml
I0110 01:58:39.353477   14086 detect.go:211] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7937e45e-8629-46cd-a72c-43a75c8aac11] Pending
helpers_test.go:353: "sp-pod" [7937e45e-8629-46cd-a72c-43a75c8aac11] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003806321s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-224091 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-224091 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-224091 apply -f testdata/storage-provisioner/pod.yaml
I0110 01:58:47.101985   14086 detect.go:211] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e5cf678f-60ba-4c3b-ae9d-48819aff1a00] Pending
helpers_test.go:353: "sp-pod" [e5cf678f-60ba-4c3b-ae9d-48819aff1a00] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003642484s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-224091 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.23s)
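Note: the sequence above exercises persistence: write a file onto the PVC-backed mount, delete and recreate the pod, and confirm the file survived. A rough sketch of the same check by shelling out to kubectl (pod and manifest names mirror this run; `kubectl wait` stands in for the harness's label polling):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-224091 context and
// returns the combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-224091"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Assumes sp-pod is already Running and mounts the claim at /tmp/mount,
	// as in the test above.
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "pod", "sp-pod"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // should still list "foo"
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v\n%s", s, out)
		if err != nil {
			panic(err)
		}
	}
}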

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh -n functional-224091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cp functional-224091:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2584037217/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh -n functional-224091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh -n functional-224091 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-224091 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-rb795" [3a4beba5-4f72-4f81-adf0-6addf053456c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-rb795" [3a4beba5-4f72-4f81-adf0-6addf053456c] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.00304244s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;": exit status 1 (119.027337ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0110 01:59:01.743124   14086 retry.go:84] will retry after 900ms: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;": exit status 1 (118.139526ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;": exit status 1 (80.360249ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;": exit status 1 (82.077591ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
2026/01/10 01:59:07 [DEBUG] GET http://127.0.0.1:36633/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1817: (dbg) Run:  kubectl --context functional-224091 exec mysql-7d7b65bc95-rb795 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.35s)
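Note: the non-zero exits above are expected while mysqld is still starting (first access denied during initialization, then the socket not yet up); the harness simply retries until the query succeeds. A minimal sketch of that retry loop (the pod name is from this run, and the doubling backoff is an assumption rather than the harness's exact schedule):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the same probe the test uses until mysqld inside the pod is ready.
	args := []string{
		"--context", "functional-224091",
		"exec", "mysql-7d7b65bc95-rb795", "--",
		"mysql", "-ppassword", "-e", "show databases;",
	}
	delay := 900 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // back off between probes
	}
	panic("mysql never became ready")
}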

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/14086/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /etc/test/nested/copy/14086/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
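Note: the path being checked comes from minikube's file sync: files placed under $MINIKUBE_HOME/.minikube/files/<path> are copied to <path> inside the node when the cluster starts. A small sketch of staging such a file and reading it back (the directory layout is the documented file-sync convention, not something visible in this log):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Stage the file under ~/.minikube/files mirroring its in-node path.
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	src := filepath.Join(home, ".minikube", "files", "etc", "test", "nested", "copy", "14086", "hosts")
	if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(src, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
		panic(err)
	}
	// The sync happens during `minikube start`; afterwards the file can be
	// read back from inside the node, as the test does.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-224091",
		"ssh", "sudo cat /etc/test/nested/copy/14086/hosts").CombinedOutput()
	os.Stdout.Write(out)
}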

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/14086.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /etc/ssl/certs/14086.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/14086.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /usr/share/ca-certificates/14086.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/140862.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /etc/ssl/certs/140862.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/140862.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /usr/share/ca-certificates/140862.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-224091 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
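Note: the --template argument above is a Go text/template that ranges over the first node's labels and prints each key. A minimal standalone version of the same range-over-map construct, using a stand-in label map instead of kubectl's items list:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for (index .items 0).metadata.labels.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-224091",
		"kubernetes.io/os":       "linux",
	}
	// Same shape as the kubectl template: print each label key, space-separated.
	const format = "{{range $k, $v := .}}{{$k}} {{end}}"
	tmpl := template.Must(template.New("labels").Parse(format))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}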

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh "sudo systemctl is-active docker": exit status 1 (290.77799ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh "sudo systemctl is-active containerd": exit status 1 (272.739704ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
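Note: the "Process exited with status 3" in the stderr above is `systemctl is-active` reporting an inactive unit; minikube then exits 1, which is the non-zero result the test asserts for the runtimes that should be disabled on a crio cluster. A small sketch that runs the same probe for both runtimes and surfaces the state and error:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-224091",
			"ssh", "sudo systemctl is-active "+unit)
		// Output captures stdout ("inactive"); the remote exit status is
		// reported on minikube's stderr and err is non-nil, matching the log.
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
	}
}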

                                                
                                    
x
+
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-224091 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-224091 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-sv7vd" [1b8edf97-aeae-4623-b461-05c746df0f12] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-sv7vd" [1b8edf97-aeae-4623-b461-05c746df0f12] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004429245s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)
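Note: the harness polls for pods matching "app=hello-node" until they are Running before declaring the deployment healthy. A compact stand-in for that readiness loop using `kubectl wait` with the same label selector:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every pod carrying the app=hello-node label is Ready,
	// with the same 10-minute ceiling the test uses.
	out, err := exec.Command("kubectl", "--context", "functional-224091",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=hello-node", "--timeout=10m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}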

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224091 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-224091
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224091 image ls --format short --alsologtostderr:
I0110 01:58:54.826516   52296 out.go:360] Setting OutFile to fd 1 ...
I0110 01:58:54.826642   52296 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:54.826651   52296 out.go:374] Setting ErrFile to fd 2...
I0110 01:58:54.826656   52296 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:54.826823   52296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
I0110 01:58:54.827403   52296 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:54.827489   52296 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:54.827939   52296 cli_runner.go:164] Run: docker container inspect functional-224091 --format={{.State.Status}}
I0110 01:58:54.846147   52296 ssh_runner.go:195] Run: systemctl --version
I0110 01:58:54.846195   52296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224091
I0110 01:58:54.864332   52296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/functional-224091/id_rsa Username:docker}
I0110 01:58:54.957159   52296 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224091 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ b9d44994d8add │ 63.3MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ localhost/my-image                                │ functional-224091                     │ c4f25e3c36983 │ 1.47MB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 54c6e074ef93c │ 804MB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ beae173ccac6a │ 1.46MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-224091                     │ 9056ab77afb8e │ 4.95MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test               │ functional-224091                     │ 7bb885e20673b │ 3.33kB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224091 image ls --format table --alsologtostderr:
I0110 01:58:58.797823   53148 out.go:360] Setting OutFile to fd 1 ...
I0110 01:58:58.798088   53148 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:58.798098   53148 out.go:374] Setting ErrFile to fd 2...
I0110 01:58:58.798102   53148 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:58.798300   53148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
I0110 01:58:58.798816   53148 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:58.798938   53148 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:58.799571   53148 cli_runner.go:164] Run: docker container inspect functional-224091 --format={{.State.Status}}
I0110 01:58:58.820950   53148 ssh_runner.go:195] Run: systemctl --version
I0110 01:58:58.821021   53148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224091
I0110 01:58:58.842642   53148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/functional-224091/id_rsa Username:docker}
I0110 01:58:58.950850   53148 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224091 image ls --format json --alsologtostderr:
[{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"47e05950404cb2f9b8df63bb1aee1b3ad383d6ca29eb2e1deb4b72845c3828a3","repoDigests":["docker.io/library/e7772125a2e041350e1860ea7bb7929361b3e43c2a8152e541d94c39f6177a37-tmp@sha256:d2fe76ec5f51e43bbb83e8c23469301fc72156f3a0ad96d771e21c823317c828"],"repoTags":[],"size":"1466132"},{"id":"7bb885e20673bca0cbce4a42fa00f23f9d20607c90d4a1f33bc59201416b0ce9","repoDigests":["localhost/minikube-local-cache-test@sha256:85ca1bc267b1f005a7fd8f0bff7a87ed10f60326568e7690c4a391d83153797d"],"repoTags":["localhost/minikube-local-cache-test:functional-224091"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repo
Digests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a0
8fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"54c6e074ef93c709bfd8e76a38f54a65e9b5a38d25c9cf82e2633a21f89cd009","repoDiges
ts":["public.ecr.aws/docker/library/mysql@sha256:615302383ec847282233669b4c18396aa075b1279ff7729af0dcd99784361659","public.ecr.aws/docker/library/mysql@sha256:90544b3775490579867a30988d48f0215fc3b88d78d8d62b2c0d96ee9226a2b7"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803768460"},{"id":"b9d44994d8adde234cc849b6518ae39e786c40b1a7c9cc1de674fb3e7f913fc2","repoDigests":["public.ecr.aws/nginx/nginx@sha256:92e3aff70715f47c5c05580bbe7ed66cb0625814e71b8885ccdbb6d89496f87f","public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"63312028"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-ma
nager:v1.35.0"],"size":"76893520"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa
64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899a
e1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4945146"},{"id":"c4f25e3c36983dc44167a296292ca3892cb34222abc6767f4b833b2e458096cb","repoDigests":["localhost/my-image@sha256:3fe6107a1e9273ed1c556eb02f932293e23025bc04756f1594ebb53ed5d4e7d8"],"repoTags":["localhost/my-image:functional-224091"],"size":"1468744"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.i
o/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224091 image ls --format json --alsologtostderr:
I0110 01:58:58.577877   53087 out.go:360] Setting OutFile to fd 1 ...
I0110 01:58:58.578123   53087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:58.578132   53087 out.go:374] Setting ErrFile to fd 2...
I0110 01:58:58.578136   53087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:58.578314   53087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
I0110 01:58:58.579122   53087 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:58.579248   53087 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:58.579684   53087 cli_runner.go:164] Run: docker container inspect functional-224091 --format={{.State.Status}}
I0110 01:58:58.597484   53087 ssh_runner.go:195] Run: systemctl --version
I0110 01:58:58.597540   53087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224091
I0110 01:58:58.614094   53087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/functional-224091/id_rsa Username:docker}
I0110 01:58:58.705752   53087 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
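Note: the JSON format above is a flat array of image records (id, repoDigests, repoTags, size). A minimal sketch that consumes it, declaring only the fields used here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above; size is a
// string in that output, so it is kept as a string here.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-224091",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-70s %s bytes\n", tag, img.Size)
		}
	}
}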

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224091 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4945146"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 54c6e074ef93c709bfd8e76a38f54a65e9b5a38d25c9cf82e2633a21f89cd009
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:615302383ec847282233669b4c18396aa075b1279ff7729af0dcd99784361659
- public.ecr.aws/docker/library/mysql@sha256:90544b3775490579867a30988d48f0215fc3b88d78d8d62b2c0d96ee9226a2b7
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803768460"
- id: b9d44994d8adde234cc849b6518ae39e786c40b1a7c9cc1de674fb3e7f913fc2
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:92e3aff70715f47c5c05580bbe7ed66cb0625814e71b8885ccdbb6d89496f87f
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "63312028"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 7bb885e20673bca0cbce4a42fa00f23f9d20607c90d4a1f33bc59201416b0ce9
repoDigests:
- localhost/minikube-local-cache-test@sha256:85ca1bc267b1f005a7fd8f0bff7a87ed10f60326568e7690c4a391d83153797d
repoTags:
- localhost/minikube-local-cache-test:functional-224091
size: "3330"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224091 image ls --format yaml --alsologtostderr:
I0110 01:58:55.053479   52366 out.go:360] Setting OutFile to fd 1 ...
I0110 01:58:55.053569   52366 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:55.053577   52366 out.go:374] Setting ErrFile to fd 2...
I0110 01:58:55.053581   52366 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:55.053762   52366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
I0110 01:58:55.054363   52366 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:55.054448   52366 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:55.054821   52366 cli_runner.go:164] Run: docker container inspect functional-224091 --format={{.State.Status}}
I0110 01:58:55.072539   52366 ssh_runner.go:195] Run: systemctl --version
I0110 01:58:55.072579   52366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224091
I0110 01:58:55.090458   52366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/functional-224091/id_rsa Username:docker}
I0110 01:58:55.182239   52366 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh pgrep buildkitd: exit status 1 (288.531952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image build -t localhost/my-image:functional-224091 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-224091 image build -t localhost/my-image:functional-224091 testdata/build --alsologtostderr: (2.800495427s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224091 image build -t localhost/my-image:functional-224091 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 47e05950404
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-224091
--> c4f25e3c369
Successfully tagged localhost/my-image:functional-224091
c4f25e3c36983dc44167a296292ca3892cb34222abc6767f4b833b2e458096cb
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224091 image build -t localhost/my-image:functional-224091 testdata/build --alsologtostderr:
I0110 01:58:55.571438   52586 out.go:360] Setting OutFile to fd 1 ...
I0110 01:58:55.571594   52586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:55.571604   52586 out.go:374] Setting ErrFile to fd 2...
I0110 01:58:55.571610   52586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 01:58:55.571863   52586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
I0110 01:58:55.572690   52586 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:55.573532   52586 config.go:182] Loaded profile config "functional-224091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 01:58:55.574182   52586 cli_runner.go:164] Run: docker container inspect functional-224091 --format={{.State.Status}}
I0110 01:58:55.594314   52586 ssh_runner.go:195] Run: systemctl --version
I0110 01:58:55.594365   52586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224091
I0110 01:58:55.612993   52586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/functional-224091/id_rsa Username:docker}
I0110 01:58:55.707235   52586 build_images.go:162] Building image from path: /tmp/build.4225319375.tar
I0110 01:58:55.707299   52586 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 01:58:55.715574   52586 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4225319375.tar
I0110 01:58:55.720026   52586 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4225319375.tar: stat -c "%s %y" /var/lib/minikube/build/build.4225319375.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4225319375.tar': No such file or directory
I0110 01:58:55.720054   52586 ssh_runner.go:362] scp /tmp/build.4225319375.tar --> /var/lib/minikube/build/build.4225319375.tar (3072 bytes)
I0110 01:58:55.741852   52586 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4225319375
I0110 01:58:55.749410   52586 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4225319375 -xf /var/lib/minikube/build/build.4225319375.tar
I0110 01:58:55.758579   52586 crio.go:315] Building image: /var/lib/minikube/build/build.4225319375
I0110 01:58:55.758641   52586 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-224091 /var/lib/minikube/build/build.4225319375 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0110 01:58:58.279548   52586 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-224091 /var/lib/minikube/build/build.4225319375 --cgroup-manager=cgroupfs: (2.520879709s)
I0110 01:58:58.279612   52586 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4225319375
I0110 01:58:58.288721   52586 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4225319375.tar
I0110 01:58:58.297350   52586 build_images.go:218] Built localhost/my-image:functional-224091 from /tmp/build.4225319375.tar
I0110 01:58:58.297387   52586 build_images.go:134] succeeded building to: functional-224091
I0110 01:58:58.297394   52586 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)
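Note on the build above: a hedged sketch of the build context, reconstructed only from the STEP 1/3..3/3 lines in the stdout capture; the exact layout of testdata/build and the contents of content.txt are assumptions, not taken from this report.

	# recreate an equivalent context (assumed layout) and build it into the node's container storage
	mkdir -p build-ctx && cd build-ctx
	printf 'placeholder\n' > content.txt        # real contents of content.txt are not shown in the log
	{
	  echo 'FROM gcr.io/k8s-minikube/busybox'
	  echo 'RUN true'
	  echo 'ADD content.txt /'
	} > Dockerfile
	out/minikube-linux-amd64 -p functional-224091 image build -t localhost/my-image:functional-224091 . --alsologtostderr

Because the profile runs the crio runtime, the build is executed on the node via `sudo podman build ... --cgroup-manager=cgroupfs`, as the stderr trace above shows.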

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 45250: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-224091 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [40f857fe-428a-4344-a948-f697ecefb916] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [40f857fe-428a-4344-a948-f697ecefb916] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00414501s
I0110 01:58:43.788617   14086 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.20s)
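The objects the wait loop above polls can be inspected directly; a hedged sketch using only names visible in the log (testsvc.yaml itself is not reproduced in this report):

	kubectl --context functional-224091 -n default get pods -l run=nginx-svc -o wide
	kubectl --context functional-224091 -n default get svc nginx-svc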

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-224091 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 --alsologtostderr: (1.25290812s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "343.523396ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "63.992422ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 service list -o json
functional_test.go:1509: Took "330.457719ms" to run "out/minikube-linux-amd64 -p functional-224091 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "348.946055ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "56.584239ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32653
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (5.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdany-port1483797109/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768010320998699329" to /tmp/TestFunctionalparallelMountCmdany-port1483797109/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768010320998699329" to /tmp/TestFunctionalparallelMountCmdany-port1483797109/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768010320998699329" to /tmp/TestFunctionalparallelMountCmdany-port1483797109/001/test-1768010320998699329
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.47846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 01:58:41.279468   14086 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 01:58 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 01:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 01:58 test-1768010320998699329
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh cat /mount-9p/test-1768010320998699329
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-224091 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [a21cbeac-4f09-408c-a8f7-cffdddcccebc] Pending
helpers_test.go:353: "busybox-mount" [a21cbeac-4f09-408c-a8f7-cffdddcccebc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [a21cbeac-4f09-408c-a8f7-cffdddcccebc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [a21cbeac-4f09-408c-a8f7-cffdddcccebc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002603552s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-224091 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdany-port1483797109/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.50s)
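The 9p mount flow exercised above can be reproduced by hand; a hedged sketch built from the commands in the log (the host directory is the per-test temp dir and will differ between runs):

	# terminal 1: keep the mount daemon in the foreground
	out/minikube-linux-amd64 mount -p functional-224091 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1
	# terminal 2: confirm the 9p mount and list it from inside the node
	out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-224091 ssh -- ls -la /mount-9p
	# cleanup once done
	out/minikube-linux-amd64 -p functional-224091 ssh "sudo umount -f /mount-9p"

The first findmnt attempt in the log exits with status 1, most likely because the mount had not completed yet; the test retries after 300ms and then succeeds.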

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32653
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-224091 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.54.42 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
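Taken together, the tunnel tests above amount to the following manual flow; a hedged sketch (10.107.54.42 is the LoadBalancer ingress IP assigned in this particular run and will differ elsewhere):

	# terminal 1: route LoadBalancer traffic from the host into the cluster
	out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr
	# terminal 2: read the assigned ingress IP and hit the service directly
	kubectl --context functional-224091 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -sSf http://10.107.54.42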

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-224091 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdspecific-port4140086077/001:/mount-9p --alsologtostderr -v=1 --port 42315]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.588484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 01:58:46.775266   14086 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdspecific-port4140086077/001:/mount-9p --alsologtostderr -v=1 --port 42315] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh "sudo umount -f /mount-9p": exit status 1 (362.232404ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-224091 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdspecific-port4140086077/001:/mount-9p --alsologtostderr -v=1 --port 42315] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3535673685/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3535673685/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3535673685/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T" /mount1: exit status 1 (474.888767ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-224091 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-224091 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3535673685/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3535673685/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224091 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3535673685/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.17s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-224091
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-224091
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-224091
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (104.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 01:59:55.577559   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:55.582842   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:55.593098   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:55.613353   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:55.653594   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:55.733881   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:55.894276   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:56.214812   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:56.855669   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 01:59:58.136480   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:00:00.698039   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:00:05.818415   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:00:16.059360   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:00:36.540021   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m43.615495108s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (104.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (3.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 kubectl -- rollout status deployment/busybox: (1.909399502s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-g24rk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-mhbvr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-zdkw2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-g24rk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-mhbvr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-zdkw2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-g24rk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-mhbvr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-zdkw2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-g24rk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-g24rk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-mhbvr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-mhbvr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-zdkw2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 kubectl -- exec busybox-769dd8b7dd-zdkw2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (26.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node add --alsologtostderr -v 5
E0110 02:01:17.500548   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 node add --alsologtostderr -v 5: (26.059701962s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-223844 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp testdata/cp-test.txt ha-223844:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268409533/001/cp-test_ha-223844.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844:/home/docker/cp-test.txt ha-223844-m02:/home/docker/cp-test_ha-223844_ha-223844-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test_ha-223844_ha-223844-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844:/home/docker/cp-test.txt ha-223844-m03:/home/docker/cp-test_ha-223844_ha-223844-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test_ha-223844_ha-223844-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844:/home/docker/cp-test.txt ha-223844-m04:/home/docker/cp-test_ha-223844_ha-223844-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test_ha-223844_ha-223844-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp testdata/cp-test.txt ha-223844-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268409533/001/cp-test_ha-223844-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m02:/home/docker/cp-test.txt ha-223844:/home/docker/cp-test_ha-223844-m02_ha-223844.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test_ha-223844-m02_ha-223844.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m02:/home/docker/cp-test.txt ha-223844-m03:/home/docker/cp-test_ha-223844-m02_ha-223844-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test_ha-223844-m02_ha-223844-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m02:/home/docker/cp-test.txt ha-223844-m04:/home/docker/cp-test_ha-223844-m02_ha-223844-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test_ha-223844-m02_ha-223844-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp testdata/cp-test.txt ha-223844-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268409533/001/cp-test_ha-223844-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m03:/home/docker/cp-test.txt ha-223844:/home/docker/cp-test_ha-223844-m03_ha-223844.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test_ha-223844-m03_ha-223844.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m03:/home/docker/cp-test.txt ha-223844-m02:/home/docker/cp-test_ha-223844-m03_ha-223844-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test_ha-223844-m03_ha-223844-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m03:/home/docker/cp-test.txt ha-223844-m04:/home/docker/cp-test_ha-223844-m03_ha-223844-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test_ha-223844-m03_ha-223844-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp testdata/cp-test.txt ha-223844-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile268409533/001/cp-test_ha-223844-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m04:/home/docker/cp-test.txt ha-223844:/home/docker/cp-test_ha-223844-m04_ha-223844.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844 "sudo cat /home/docker/cp-test_ha-223844-m04_ha-223844.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m04:/home/docker/cp-test.txt ha-223844-m02:/home/docker/cp-test_ha-223844-m04_ha-223844-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test_ha-223844-m04_ha-223844-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 cp ha-223844-m04:/home/docker/cp-test.txt ha-223844-m03:/home/docker/cp-test_ha-223844-m04_ha-223844-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m03 "sudo cat /home/docker/cp-test_ha-223844-m04_ha-223844-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.31s)
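Every cp/ssh pair above follows the same copy-then-verify pattern; a hedged sketch of one round trip, using commands taken verbatim from the log:

	# host -> primary node, primary node -> m02, then read the file back on m02
	out/minikube-linux-amd64 -p ha-223844 cp testdata/cp-test.txt ha-223844:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-223844 cp ha-223844:/home/docker/cp-test.txt ha-223844-m02:/home/docker/cp-test_ha-223844_ha-223844-m02.txt
	out/minikube-linux-amd64 -p ha-223844 ssh -n ha-223844-m02 "sudo cat /home/docker/cp-test_ha-223844_ha-223844-m02.txt"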

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (13.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 node stop m02 --alsologtostderr -v 5: (12.563835567s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5: exit status 7 (683.097729ms)

                                                
                                                
-- stdout --
	ha-223844
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-223844-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-223844-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-223844-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:01:58.691988   73424 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:01:58.692089   73424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:01:58.692097   73424 out.go:374] Setting ErrFile to fd 2...
	I0110 02:01:58.692101   73424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:01:58.692273   73424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:01:58.692431   73424 out.go:368] Setting JSON to false
	I0110 02:01:58.692454   73424 mustload.go:66] Loading cluster: ha-223844
	I0110 02:01:58.692500   73424 notify.go:221] Checking for updates...
	I0110 02:01:58.692829   73424 config.go:182] Loaded profile config "ha-223844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:01:58.692844   73424 status.go:174] checking status of ha-223844 ...
	I0110 02:01:58.693322   73424 cli_runner.go:164] Run: docker container inspect ha-223844 --format={{.State.Status}}
	I0110 02:01:58.713159   73424 status.go:371] ha-223844 host status = "Running" (err=<nil>)
	I0110 02:01:58.713181   73424 host.go:66] Checking if "ha-223844" exists ...
	I0110 02:01:58.713453   73424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-223844
	I0110 02:01:58.731042   73424 host.go:66] Checking if "ha-223844" exists ...
	I0110 02:01:58.731264   73424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:01:58.731319   73424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-223844
	I0110 02:01:58.748065   73424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32785 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/ha-223844/id_rsa Username:docker}
	I0110 02:01:58.839136   73424 ssh_runner.go:195] Run: systemctl --version
	I0110 02:01:58.845389   73424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:01:58.857599   73424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:01:58.916555   73424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 02:01:58.906756619 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:01:58.917100   73424 kubeconfig.go:125] found "ha-223844" server: "https://192.168.49.254:8443"
	I0110 02:01:58.917140   73424 api_server.go:166] Checking apiserver status ...
	I0110 02:01:58.917178   73424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:01:58.929263   73424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I0110 02:01:58.937346   73424 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1240/cgroup
	I0110 02:01:58.945163   73424 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-a32bd4ec29e0a7665d46de80e7eb72c925bbd8bbed104d01fc929b36a61e3fcd.scope/container/cgroup.freeze
	I0110 02:01:58.952158   73424 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 02:01:58.957656   73424 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 02:01:58.957679   73424 status.go:463] ha-223844 apiserver status = Running (err=<nil>)
	I0110 02:01:58.957695   73424 status.go:176] ha-223844 status: &{Name:ha-223844 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:01:58.957714   73424 status.go:174] checking status of ha-223844-m02 ...
	I0110 02:01:58.958043   73424 cli_runner.go:164] Run: docker container inspect ha-223844-m02 --format={{.State.Status}}
	I0110 02:01:58.975659   73424 status.go:371] ha-223844-m02 host status = "Stopped" (err=<nil>)
	I0110 02:01:58.975677   73424 status.go:384] host is not running, skipping remaining checks
	I0110 02:01:58.975682   73424 status.go:176] ha-223844-m02 status: &{Name:ha-223844-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:01:58.975704   73424 status.go:174] checking status of ha-223844-m03 ...
	I0110 02:01:58.975979   73424 cli_runner.go:164] Run: docker container inspect ha-223844-m03 --format={{.State.Status}}
	I0110 02:01:58.994115   73424 status.go:371] ha-223844-m03 host status = "Running" (err=<nil>)
	I0110 02:01:58.994133   73424 host.go:66] Checking if "ha-223844-m03" exists ...
	I0110 02:01:58.994351   73424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-223844-m03
	I0110 02:01:59.011698   73424 host.go:66] Checking if "ha-223844-m03" exists ...
	I0110 02:01:59.011942   73424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:01:59.011975   73424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-223844-m03
	I0110 02:01:59.028004   73424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32795 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/ha-223844-m03/id_rsa Username:docker}
	I0110 02:01:59.116830   73424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:01:59.128793   73424 kubeconfig.go:125] found "ha-223844" server: "https://192.168.49.254:8443"
	I0110 02:01:59.128816   73424 api_server.go:166] Checking apiserver status ...
	I0110 02:01:59.128843   73424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:01:59.138933   73424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	I0110 02:01:59.147099   73424 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1164/cgroup
	I0110 02:01:59.154765   73424 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-e9586f73a401ce05c2e30b7ca9c0aeb05801a54c3c324781a1086b30f1b778ba.scope/container/cgroup.freeze
	I0110 02:01:59.161613   73424 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 02:01:59.165448   73424 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 02:01:59.165471   73424 status.go:463] ha-223844-m03 apiserver status = Running (err=<nil>)
	I0110 02:01:59.165483   73424 status.go:176] ha-223844-m03 status: &{Name:ha-223844-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:01:59.165498   73424 status.go:174] checking status of ha-223844-m04 ...
	I0110 02:01:59.165775   73424 cli_runner.go:164] Run: docker container inspect ha-223844-m04 --format={{.State.Status}}
	I0110 02:01:59.183181   73424 status.go:371] ha-223844-m04 host status = "Running" (err=<nil>)
	I0110 02:01:59.183198   73424 host.go:66] Checking if "ha-223844-m04" exists ...
	I0110 02:01:59.183467   73424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-223844-m04
	I0110 02:01:59.200681   73424 host.go:66] Checking if "ha-223844-m04" exists ...
	I0110 02:01:59.200941   73424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:01:59.200975   73424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-223844-m04
	I0110 02:01:59.218095   73424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/ha-223844-m04/id_rsa Username:docker}
	I0110 02:01:59.307133   73424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:01:59.319059   73424 status.go:176] ha-223844-m04 status: &{Name:ha-223844-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.25s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.3s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 node start m02 --alsologtostderr -v 5: (7.369344965s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.1s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 stop --alsologtostderr -v 5
E0110 02:02:39.423174   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 stop --alsologtostderr -v 5: (49.005298038s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 start --wait true --alsologtostderr -v 5
E0110 02:03:32.102749   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.108008   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.118267   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.138531   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.178823   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.259139   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.419562   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:32.740105   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:33.380995   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:34.661880   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:37.223254   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:42.343407   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:52.583989   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 start --wait true --alsologtostderr -v 5: (56.971461398s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.10s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 node delete m03 --alsologtostderr -v 5: (10.272995541s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (49s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 stop --alsologtostderr -v 5
E0110 02:04:13.064480   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:04:54.025746   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:04:55.574570   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 stop --alsologtostderr -v 5: (48.88847563s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5: exit status 7 (110.950692ms)
-- stdout --
	ha-223844
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-223844-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-223844-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 02:04:56.011773   87672 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:04:56.012036   87672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:04:56.012046   87672 out.go:374] Setting ErrFile to fd 2...
	I0110 02:04:56.012050   87672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:04:56.012230   87672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:04:56.012403   87672 out.go:368] Setting JSON to false
	I0110 02:04:56.012430   87672 mustload.go:66] Loading cluster: ha-223844
	I0110 02:04:56.012496   87672 notify.go:221] Checking for updates...
	I0110 02:04:56.012961   87672 config.go:182] Loaded profile config "ha-223844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:04:56.012983   87672 status.go:174] checking status of ha-223844 ...
	I0110 02:04:56.013447   87672 cli_runner.go:164] Run: docker container inspect ha-223844 --format={{.State.Status}}
	I0110 02:04:56.032492   87672 status.go:371] ha-223844 host status = "Stopped" (err=<nil>)
	I0110 02:04:56.032512   87672 status.go:384] host is not running, skipping remaining checks
	I0110 02:04:56.032521   87672 status.go:176] ha-223844 status: &{Name:ha-223844 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:04:56.032563   87672 status.go:174] checking status of ha-223844-m02 ...
	I0110 02:04:56.032821   87672 cli_runner.go:164] Run: docker container inspect ha-223844-m02 --format={{.State.Status}}
	I0110 02:04:56.048770   87672 status.go:371] ha-223844-m02 host status = "Stopped" (err=<nil>)
	I0110 02:04:56.048812   87672 status.go:384] host is not running, skipping remaining checks
	I0110 02:04:56.048829   87672 status.go:176] ha-223844-m02 status: &{Name:ha-223844-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:04:56.048858   87672 status.go:174] checking status of ha-223844-m04 ...
	I0110 02:04:56.049179   87672 cli_runner.go:164] Run: docker container inspect ha-223844-m04 --format={{.State.Status}}
	I0110 02:04:56.066264   87672 status.go:371] ha-223844-m04 host status = "Stopped" (err=<nil>)
	I0110 02:04:56.066302   87672 status.go:384] host is not running, skipping remaining checks
	I0110 02:04:56.066321   87672 status.go:176] ha-223844-m04 status: &{Name:ha-223844-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (49.00s)

TestMultiControlPlane/serial/RestartCluster (54.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 02:05:23.263365   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.886791163s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.66s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (32.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 node add --control-plane --alsologtostderr -v 5
E0110 02:06:15.946719   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-223844 node add --control-plane --alsologtostderr -v 5: (31.425064248s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-223844 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (32.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (38.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-319215 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-319215 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.93502647s)
--- PASS: TestJSONOutput/start/Command (38.94s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-319215 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-319215 --output=json --user=testUser: (7.974525567s)
--- PASS: TestJSONOutput/stop/Command (7.97s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-165812 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-165812 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.165932ms)
-- stdout --
	{"specversion":"1.0","id":"f7824a76-ceec-405a-b452-44e2b8370e16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-165812] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42c1ce69-3e41-4c1e-894b-82edf434c3d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22414"}}
	{"specversion":"1.0","id":"f5236ebd-9d23-4160-8095-fca9cb8380f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"98a3f661-b16f-4efa-bd74-72f92dacc844","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig"}}
	{"specversion":"1.0","id":"06d6bcac-b73b-44a2-8e16-c2067b1c2f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube"}}
	{"specversion":"1.0","id":"8ab30160-06a1-438a-a990-c3bd453f4f93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cf53076b-3999-47d4-af3e-1ddd046b47bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"791b2c0a-d9a4-4407-a3e8-4a308bfbe6cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-165812" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-165812
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (23.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-856381 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-856381 --network=: (20.96943245s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-856381" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-856381
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-856381: (2.066386059s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.05s)

TestKicCustomNetwork/use_default_bridge_network (22.56s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-288683 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-288683 --network=bridge: (20.580666495s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-288683" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-288683
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-288683: (1.961968482s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.56s)

TestKicExistingNetwork (20.08s)

=== RUN   TestKicExistingNetwork
I0110 02:08:14.452104   14086 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 02:08:14.468851   14086 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 02:08:14.468926   14086 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 02:08:14.468951   14086 cli_runner.go:164] Run: docker network inspect existing-network
W0110 02:08:14.485657   14086 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 02:08:14.485683   14086 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0110 02:08:14.485695   14086 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0110 02:08:14.485876   14086 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 02:08:14.502671   14086 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903d976062b9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:ca:09:29:f6:1b} reservation:<nil>}
I0110 02:08:14.502995   14086 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f9dac0}
I0110 02:08:14.503025   14086 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 02:08:14.503071   14086 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 02:08:14.547483   14086 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-505068 --network=existing-network
E0110 02:08:32.101222   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-505068 --network=existing-network: (17.985060209s)
helpers_test.go:176: Cleaning up "existing-network-505068" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-505068
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-505068: (1.963729653s)
I0110 02:08:34.513468   14086 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (20.08s)

TestKicCustomSubnet (23.53s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-593958 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-593958 --subnet=192.168.60.0/24: (21.397617213s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-593958 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-593958" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-593958
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-593958: (2.115861458s)
--- PASS: TestKicCustomSubnet (23.53s)

TestKicStaticIP (23.14s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-433036 --static-ip=192.168.200.200
E0110 02:08:59.788250   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-433036 --static-ip=192.168.200.200: (20.910558395s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-433036 ip
helpers_test.go:176: Cleaning up "static-ip-433036" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-433036
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-433036: (2.088707376s)
--- PASS: TestKicStaticIP (23.14s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (38.94s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-244754 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-244754 --driver=docker  --container-runtime=crio: (16.493493461s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-246650 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-246650 --driver=docker  --container-runtime=crio: (16.598867122s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-244754
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-246650
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
E0110 02:09:55.575152   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "second-246650" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-246650
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-246650: (2.300783203s)
helpers_test.go:176: Cleaning up "first-244754" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-244754
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-244754: (2.275053874s)
--- PASS: TestMinikubeProfile (38.94s)

TestMountStart/serial/StartWithMountFirst (4.6s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-439023 --memory=3072 --mount-string /tmp/TestMountStartserial2204179380/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-439023 --memory=3072 --mount-string /tmp/TestMountStartserial2204179380/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.597443038s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.60s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-439023 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (4.56s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-453495 --memory=3072 --mount-string /tmp/TestMountStartserial2204179380/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-453495 --memory=3072 --mount-string /tmp/TestMountStartserial2204179380/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.558922561s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.56s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453495 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-439023 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-439023 --alsologtostderr -v=5: (1.652168032s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453495 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-453495
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-453495: (1.243805617s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.28s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-453495
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-453495: (6.280520242s)
--- PASS: TestMountStart/serial/RestartStopped (7.28s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453495 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (62.79s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-762808 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-762808 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.322657108s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.79s)

TestMultiNode/serial/DeployApp2Nodes (2.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-762808 -- rollout status deployment/busybox: (1.656249854s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-rjf4d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-z7xhv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-rjf4d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-z7xhv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-rjf4d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-z7xhv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.96s)

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-rjf4d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-rjf4d -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-z7xhv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-762808 -- exec busybox-769dd8b7dd-z7xhv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)

TestMultiNode/serial/AddNode (23.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-762808 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-762808 -v=5 --alsologtostderr: (22.608971686s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.24s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-762808 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp testdata/cp-test.txt multinode-762808:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1952233926/001/cp-test_multinode-762808.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808:/home/docker/cp-test.txt multinode-762808-m02:/home/docker/cp-test_multinode-762808_multinode-762808-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test_multinode-762808_multinode-762808-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808:/home/docker/cp-test.txt multinode-762808-m03:/home/docker/cp-test_multinode-762808_multinode-762808-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m03 "sudo cat /home/docker/cp-test_multinode-762808_multinode-762808-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp testdata/cp-test.txt multinode-762808-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1952233926/001/cp-test_multinode-762808-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808-m02:/home/docker/cp-test.txt multinode-762808:/home/docker/cp-test_multinode-762808-m02_multinode-762808.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808 "sudo cat /home/docker/cp-test_multinode-762808-m02_multinode-762808.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808-m02:/home/docker/cp-test.txt multinode-762808-m03:/home/docker/cp-test_multinode-762808-m02_multinode-762808-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m03 "sudo cat /home/docker/cp-test_multinode-762808-m02_multinode-762808-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp testdata/cp-test.txt multinode-762808-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1952233926/001/cp-test_multinode-762808-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808-m03:/home/docker/cp-test.txt multinode-762808:/home/docker/cp-test_multinode-762808-m03_multinode-762808.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808 "sudo cat /home/docker/cp-test_multinode-762808-m03_multinode-762808.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 cp multinode-762808-m03:/home/docker/cp-test.txt multinode-762808-m02:/home/docker/cp-test_multinode-762808-m03_multinode-762808-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test_multinode-762808-m03_multinode-762808-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.26s)
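
Note: the CopyFile steps above are all variations of a single command, "minikube cp", which copies between the host and any node of the profile, or directly between two nodes. A minimal sketch of the same pattern, assuming the multinode-762808 profile is running (the /tmp destination path here is illustrative, not the test's temp dir):
	# host -> node
	minikube -p multinode-762808 cp testdata/cp-test.txt multinode-762808:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-762808 cp multinode-762808:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node (primary to the m02 worker)
	minikube -p multinode-762808 cp multinode-762808:/home/docker/cp-test.txt multinode-762808-m02:/home/docker/cp-test.txt
	# verify on the target node
	minikube -p multinode-762808 ssh -n multinode-762808-m02 "sudo cat /home/docker/cp-test.txt"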

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-762808 node stop m03: (1.247369508s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-762808 status: exit status 7 (475.223386ms)

                                                
                                                
-- stdout --
	multinode-762808
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-762808-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-762808-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr: exit status 7 (477.22284ms)

                                                
                                                
-- stdout --
	multinode-762808
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-762808-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-762808-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:12:03.758800  147626 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:12:03.759065  147626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:12:03.759075  147626 out.go:374] Setting ErrFile to fd 2...
	I0110 02:12:03.759081  147626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:12:03.759307  147626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:12:03.759493  147626 out.go:368] Setting JSON to false
	I0110 02:12:03.759522  147626 mustload.go:66] Loading cluster: multinode-762808
	I0110 02:12:03.759617  147626 notify.go:221] Checking for updates...
	I0110 02:12:03.759895  147626 config.go:182] Loaded profile config "multinode-762808": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:12:03.759911  147626 status.go:174] checking status of multinode-762808 ...
	I0110 02:12:03.760358  147626 cli_runner.go:164] Run: docker container inspect multinode-762808 --format={{.State.Status}}
	I0110 02:12:03.779358  147626 status.go:371] multinode-762808 host status = "Running" (err=<nil>)
	I0110 02:12:03.779381  147626 host.go:66] Checking if "multinode-762808" exists ...
	I0110 02:12:03.779662  147626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-762808
	I0110 02:12:03.796642  147626 host.go:66] Checking if "multinode-762808" exists ...
	I0110 02:12:03.796907  147626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:12:03.796965  147626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-762808
	I0110 02:12:03.814184  147626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32905 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/multinode-762808/id_rsa Username:docker}
	I0110 02:12:03.903512  147626 ssh_runner.go:195] Run: systemctl --version
	I0110 02:12:03.909564  147626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:12:03.921275  147626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:12:03.975499  147626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2026-01-10 02:12:03.96530869 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:12:03.976040  147626 kubeconfig.go:125] found "multinode-762808" server: "https://192.168.67.2:8443"
	I0110 02:12:03.976081  147626 api_server.go:166] Checking apiserver status ...
	I0110 02:12:03.976128  147626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:12:03.987724  147626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	I0110 02:12:03.995744  147626 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1230/cgroup
	I0110 02:12:04.002974  147626 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-c2813c49c521d76f6efe749d7cb975ea0fbb5a57136476cdb3c190dbf7691187.scope/container/cgroup.freeze
	I0110 02:12:04.009856  147626 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 02:12:04.013930  147626 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 02:12:04.013951  147626 status.go:463] multinode-762808 apiserver status = Running (err=<nil>)
	I0110 02:12:04.013964  147626 status.go:176] multinode-762808 status: &{Name:multinode-762808 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:12:04.013986  147626 status.go:174] checking status of multinode-762808-m02 ...
	I0110 02:12:04.014292  147626 cli_runner.go:164] Run: docker container inspect multinode-762808-m02 --format={{.State.Status}}
	I0110 02:12:04.031078  147626 status.go:371] multinode-762808-m02 host status = "Running" (err=<nil>)
	I0110 02:12:04.031097  147626 host.go:66] Checking if "multinode-762808-m02" exists ...
	I0110 02:12:04.031368  147626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-762808-m02
	I0110 02:12:04.047497  147626 host.go:66] Checking if "multinode-762808-m02" exists ...
	I0110 02:12:04.047763  147626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:12:04.047803  147626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-762808-m02
	I0110 02:12:04.063935  147626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/22414-10552/.minikube/machines/multinode-762808-m02/id_rsa Username:docker}
	I0110 02:12:04.152455  147626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:12:04.164051  147626 status.go:176] multinode-762808-m02 status: &{Name:multinode-762808-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:12:04.164085  147626 status.go:174] checking status of multinode-762808-m03 ...
	I0110 02:12:04.164355  147626 cli_runner.go:164] Run: docker container inspect multinode-762808-m03 --format={{.State.Status}}
	I0110 02:12:04.181134  147626 status.go:371] multinode-762808-m03 host status = "Stopped" (err=<nil>)
	I0110 02:12:04.181151  147626 status.go:384] host is not running, skipping remaining checks
	I0110 02:12:04.181158  147626 status.go:176] multinode-762808-m03 status: &{Name:multinode-762808-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
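
Note: the non-zero exits above are expected. "minikube status" exits 7 when any node's host is stopped, while still printing the per-node breakdown, so scripts should not treat the exit code alone as a failure. A minimal sketch, assuming the same profile:
	minikube -p multinode-762808 node stop m03
	# exits 7 while m03 is down; ignore the exit code and read the per-node output instead
	minikube -p multinode-762808 status || true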

                                                
                                    
TestMultiNode/serial/StartAfterStop (7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-762808 node start m03 -v=5 --alsologtostderr: (6.313610792s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.00s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (57.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-762808
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-762808
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-762808: (29.418456759s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-762808 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-762808 --wait=true -v=5 --alsologtostderr: (27.622298908s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-762808
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.16s)
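
Note: the check above is stop-then-restart with --wait=true, after which the node list must match the one captured before the stop. A minimal sketch of that round-trip (flags trimmed relative to the test invocation):
	minikube node list -p multinode-762808
	minikube stop -p multinode-762808
	minikube start -p multinode-762808 --wait=true
	# should list the same control-plane and worker nodes as before the stop
	minikube node list -p multinode-762808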

                                                
                                    
TestMultiNode/serial/DeleteNode (4.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-762808 node delete m03: (4.391599619s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.97s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (17.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-762808 stop: (17.412501337s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-762808 status: exit status 7 (94.425475ms)

                                                
                                                
-- stdout --
	multinode-762808
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-762808-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr: exit status 7 (91.860292ms)

                                                
                                                
-- stdout --
	multinode-762808
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-762808-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:13:30.873171  156571 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:13:30.873375  156571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:13:30.873383  156571 out.go:374] Setting ErrFile to fd 2...
	I0110 02:13:30.873387  156571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:13:30.873574  156571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:13:30.873765  156571 out.go:368] Setting JSON to false
	I0110 02:13:30.873789  156571 mustload.go:66] Loading cluster: multinode-762808
	I0110 02:13:30.873899  156571 notify.go:221] Checking for updates...
	I0110 02:13:30.874160  156571 config.go:182] Loaded profile config "multinode-762808": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:13:30.874175  156571 status.go:174] checking status of multinode-762808 ...
	I0110 02:13:30.874780  156571 cli_runner.go:164] Run: docker container inspect multinode-762808 --format={{.State.Status}}
	I0110 02:13:30.893570  156571 status.go:371] multinode-762808 host status = "Stopped" (err=<nil>)
	I0110 02:13:30.893591  156571 status.go:384] host is not running, skipping remaining checks
	I0110 02:13:30.893597  156571 status.go:176] multinode-762808 status: &{Name:multinode-762808 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:13:30.893617  156571 status.go:174] checking status of multinode-762808-m02 ...
	I0110 02:13:30.893873  156571 cli_runner.go:164] Run: docker container inspect multinode-762808-m02 --format={{.State.Status}}
	I0110 02:13:30.911509  156571 status.go:371] multinode-762808-m02 host status = "Stopped" (err=<nil>)
	I0110 02:13:30.911529  156571 status.go:384] host is not running, skipping remaining checks
	I0110 02:13:30.911535  156571 status.go:176] multinode-762808-m02 status: &{Name:multinode-762808-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (17.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-762808 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0110 02:13:32.100740   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-762808 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.62456381s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-762808 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.20s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-762808
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-762808-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-762808-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.431391ms)

                                                
                                                
-- stdout --
	* [multinode-762808-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-762808-m02' is duplicated with machine name 'multinode-762808-m02' in profile 'multinode-762808'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-762808-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-762808-m03 --driver=docker  --container-runtime=crio: (20.304272656s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-762808
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-762808: exit status 80 (290.246354ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-762808 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-762808-m03 already exists in multinode-762808-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-762808-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-762808-m03: (2.34079359s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.06s)

                                                
                                    
TestScheduledStopUnix (93.1s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-930407 --memory=3072 --driver=docker  --container-runtime=crio
E0110 02:14:55.577099   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-930407 --memory=3072 --driver=docker  --container-runtime=crio: (16.758685528s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930407 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 02:15:00.125028  166471 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:15:00.125291  166471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:15:00.125302  166471 out.go:374] Setting ErrFile to fd 2...
	I0110 02:15:00.125306  166471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:15:00.125514  166471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:15:00.125739  166471 out.go:368] Setting JSON to false
	I0110 02:15:00.125822  166471 mustload.go:66] Loading cluster: scheduled-stop-930407
	I0110 02:15:00.126126  166471 config.go:182] Loaded profile config "scheduled-stop-930407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:15:00.126201  166471 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/scheduled-stop-930407/config.json ...
	I0110 02:15:00.126371  166471 mustload.go:66] Loading cluster: scheduled-stop-930407
	I0110 02:15:00.126470  166471 config.go:182] Loaded profile config "scheduled-stop-930407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-930407 -n scheduled-stop-930407
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 02:15:00.503074  166621 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:15:00.503175  166621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:15:00.503182  166621 out.go:374] Setting ErrFile to fd 2...
	I0110 02:15:00.503188  166621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:15:00.503386  166621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:15:00.503602  166621 out.go:368] Setting JSON to false
	I0110 02:15:00.503780  166621 daemonize_unix.go:73] killing process 166504 as it is an old scheduled stop
	I0110 02:15:00.503903  166621 mustload.go:66] Loading cluster: scheduled-stop-930407
	I0110 02:15:00.504203  166621 config.go:182] Loaded profile config "scheduled-stop-930407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:15:00.504282  166621 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/scheduled-stop-930407/config.json ...
	I0110 02:15:00.504449  166621 mustload.go:66] Loading cluster: scheduled-stop-930407
	I0110 02:15:00.504537  166621 config.go:182] Loaded profile config "scheduled-stop-930407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 02:15:00.509119   14086 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/scheduled-stop-930407/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930407 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-930407 -n scheduled-stop-930407
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-930407
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930407 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 02:15:26.349286  167326 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:15:26.349554  167326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:15:26.349565  167326 out.go:374] Setting ErrFile to fd 2...
	I0110 02:15:26.349572  167326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:15:26.349769  167326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:15:26.350041  167326 out.go:368] Setting JSON to false
	I0110 02:15:26.350131  167326 mustload.go:66] Loading cluster: scheduled-stop-930407
	I0110 02:15:26.350430  167326 config.go:182] Loaded profile config "scheduled-stop-930407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:15:26.350508  167326 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/scheduled-stop-930407/config.json ...
	I0110 02:15:26.350703  167326 mustload.go:66] Loading cluster: scheduled-stop-930407
	I0110 02:15:26.350829  167326 config.go:182] Loaded profile config "scheduled-stop-930407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-930407
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-930407: exit status 7 (77.579096ms)

                                                
                                                
-- stdout --
	scheduled-stop-930407
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-930407 -n scheduled-stop-930407
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-930407 -n scheduled-stop-930407: exit status 7 (77.439137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-930407" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-930407
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-930407: (4.885357256s)
--- PASS: TestScheduledStopUnix (93.10s)
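
Note: the flow above schedules a future stop, inspects it, replaces it, and cancels it. A minimal sketch with the same profile (the 5m and 15s values are just the ones the test happens to use):
	# schedule a stop 5 minutes out
	minikube stop -p scheduled-stop-930407 --schedule 5m
	# a pending stop is reflected in TimeToStop
	minikube status --format={{.TimeToStop}} -p scheduled-stop-930407
	# re-scheduling replaces the previous schedule; --cancel-scheduled clears it
	minikube stop -p scheduled-stop-930407 --schedule 15s
	minikube stop -p scheduled-stop-930407 --cancel-scheduled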

                                                
                                    
TestInsufficientStorage (8.55s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-051762 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E0110 02:16:18.624872   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-051762 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.161503689s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bd424ffd-2604-4963-8836-68b3b7ec1285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-051762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d22492a0-0b89-40de-a4ab-5710beaa5e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22414"}}
	{"specversion":"1.0","id":"f6ec1614-e34b-4bde-8ea3-ac012a34c778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c4f64a4e-98da-44a7-89a1-30e3b1869988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig"}}
	{"specversion":"1.0","id":"154fe555-89c3-42d8-8f80-9c23ac741eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube"}}
	{"specversion":"1.0","id":"e1d2fe94-1fa2-416a-8f30-e572493fdd21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"10b1c0e6-70e1-4a07-95ea-46f6699482fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd265b4e-351e-4354-a86a-1db133d90e94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8b694935-4476-46ed-8153-675f331f5d02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"62ba1eb0-6eac-4b2a-93c5-4c62c527dc3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"486b0b26-2171-4882-a648-a4488cb80c3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6e358772-3e35-4879-9f2d-e6c6b89fba92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-051762\" primary control-plane node in \"insufficient-storage-051762\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ab22e15-5a5a-404b-9334-88ac8b981634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c209e98-9d41-4cbd-af2d-bccbb68d31de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"114d9d6a-d549-4e72-8cfa-3046db1b2e3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-051762 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-051762 --output=json --layout=cluster: exit status 7 (279.50241ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-051762","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-051762","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:16:22.834397  169869 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-051762" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-051762 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-051762 --output=json --layout=cluster: exit status 7 (273.271703ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-051762","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-051762","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:16:23.109070  169978 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-051762" does not appear in /home/jenkins/minikube-integration/22414-10552/kubeconfig
	E0110 02:16:23.119043  169978 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/insufficient-storage-051762/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-051762" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-051762
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-051762: (1.839753766s)
--- PASS: TestInsufficientStorage (8.55s)
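
Note: the RSRC_DOCKER_STORAGE error above carries its own remediation advice; restated as commands (the --force variant only skips the capacity check, it does not free space):
	# free unused Docker data on the host
	docker system prune -a
	# or prune inside the node, when the cluster uses the Docker container runtime
	minikube ssh -- docker system prune
	# or bypass the storage check entirely
	minikube start -p insufficient-storage-051762 --force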

                                                
                                    
TestRunningBinaryUpgrade (321.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2195417453 start -p running-upgrade-138757 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2195417453 start -p running-upgrade-138757 --memory=3072 --vm-driver=docker  --container-runtime=crio: (49.17986364s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-138757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-138757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.089729165s)
helpers_test.go:176: Cleaning up "running-upgrade-138757" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-138757
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-138757: (2.424260112s)
--- PASS: TestRunningBinaryUpgrade (321.59s)

                                                
                                    
TestKubernetesUpgrade (310.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.130888771s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-652189 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-652189 --alsologtostderr: (1.919719192s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-652189 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-652189 status --format={{.Host}}: exit status 7 (76.839836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0110 02:19:55.149588   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:19:55.575211   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/addons-600454/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.860038177s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-652189 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.357812ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-652189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-652189
	    minikube start -p kubernetes-upgrade-652189 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6521892 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-652189 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-652189 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.19227207s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-652189" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-652189
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-652189: (4.433270599s)
--- PASS: TestKubernetesUpgrade (310.79s)
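
Note: the upgrade path exercised above is stop-then-start with a newer --kubernetes-version, while an in-place downgrade of the same profile is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED). Following the suggestion printed in the log, a minimal sketch:
	# upgrade: stop, then start the same profile at the newer version
	minikube stop -p kubernetes-upgrade-652189
	minikube start -p kubernetes-upgrade-652189 --kubernetes-version=v1.35.0
	# downgrade: recreate the profile at the older version instead
	minikube delete -p kubernetes-upgrade-652189
	minikube start -p kubernetes-upgrade-652189 --kubernetes-version=v1.28.0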

                                                
                                    
TestMissingContainerUpgrade (77.09s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1083522533 start -p missing-upgrade-478501 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1083522533 start -p missing-upgrade-478501 --memory=3072 --driver=docker  --container-runtime=crio: (22.20994051s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-478501
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-478501: (10.453401907s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-478501
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-478501 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-478501 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.299435436s)
helpers_test.go:176: Cleaning up "missing-upgrade-478501" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-478501
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-478501: (1.981092741s)
--- PASS: TestMissingContainerUpgrade (77.09s)
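
Note: this test removes the node container behind minikube's back and checks that a newer binary recreates it on the next start. A minimal sketch of the simulated failure, assuming a docker-driver profile named missing-upgrade-478501 (flags trimmed relative to the test invocation):
	# delete the node container out from under the profile
	docker stop missing-upgrade-478501
	docker rm missing-upgrade-478501
	# a subsequent start recreates the missing node
	minikube start -p missing-upgrade-478501 --driver=docker --container-runtime=crio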

                                                
                                    
TestPreload/Start-NoPreload-PullImage (65.99s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-107034 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-107034 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (58.711533472s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-107034 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-107034
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-107034: (6.381118913s)
--- PASS: TestPreload/Start-NoPreload-PullImage (65.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (64.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.694257610 start -p stopped-upgrade-116273 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.694257610 start -p stopped-upgrade-116273 --memory=3072 --vm-driver=docker  --container-runtime=crio: (49.496956872s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.694257610 -p stopped-upgrade-116273 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.694257610 -p stopped-upgrade-116273 stop: (2.36556206s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-116273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-116273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (12.846769335s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.71s)

                                                
                                    
TestPause/serial/Start (41.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-538591 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-538591 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.342731978s)
--- PASS: TestPause/serial/Start (41.34s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-116273
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-116273: (1.263194344s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (47.78s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-107034 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-107034 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.546200716s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-107034 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (47.78s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-731674 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-731674 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (78.383641ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-731674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
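For anyone reproducing this check outside the harness, a minimal sketch of the flag conflict the test exercises (the profile name "demo" is hypothetical; the exit code, error text, and unset command are the ones shown in the output above):
	$ minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	# exits 14 with: X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes
	$ minikube config unset kubernetes-version
	$ minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio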

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (18.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-731674 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-731674 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.496699601s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-731674 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (18.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-731674 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-731674 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.977442184s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-731674 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-731674 status -o json: exit status 2 (313.628016ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-731674","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-731674
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-731674: (2.02365099s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.32s)
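The exit status 2 from `minikube status` is the expected outcome here: after the --no-kubernetes restart the host container is running but the kubelet is stopped, and status reports that with a non-zero exit. A small scripted equivalent of the same check (jq is an assumption, not something the test uses; the field values are the ones in the JSON above):
	$ out/minikube-linux-amd64 -p NoKubernetes-731674 status -o json | jq -r '.Host, .Kubelet'
	Running
	Stopped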

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.66s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-538591 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-538591 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.643154893s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.66s)

                                                
                                    
TestNoKubernetes/serial/Start (8.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-731674 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-731674 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.967939847s)
--- PASS: TestNoKubernetes/serial/Start (8.97s)

                                                
                                    
TestNetworkPlugins/group/false (4.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-647049 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-647049 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (189.005ms)

                                                
                                                
-- stdout --
	* [false-647049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:18:22.603613  203181 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:18:22.603863  203181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:22.603872  203181 out.go:374] Setting ErrFile to fd 2...
	I0110 02:18:22.603877  203181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:18:22.604063  203181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-10552/.minikube/bin
	I0110 02:18:22.604497  203181 out.go:368] Setting JSON to false
	I0110 02:18:22.605522  203181 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3652,"bootTime":1768007851,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 02:18:22.605577  203181 start.go:143] virtualization: kvm guest
	I0110 02:18:22.607562  203181 out.go:179] * [false-647049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 02:18:22.608935  203181 notify.go:221] Checking for updates...
	I0110 02:18:22.608974  203181 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:18:22.610372  203181 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:18:22.611624  203181 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-10552/kubeconfig
	I0110 02:18:22.613032  203181 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-10552/.minikube
	I0110 02:18:22.614747  203181 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 02:18:22.616142  203181 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:18:22.617868  203181 config.go:182] Loaded profile config "NoKubernetes-731674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0110 02:18:22.618059  203181 config.go:182] Loaded profile config "force-systemd-env-135135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:18:22.618207  203181 config.go:182] Loaded profile config "running-upgrade-138757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 02:18:22.618346  203181 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:18:22.652631  203181 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 02:18:22.652794  203181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:18:22.717896  203181 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2026-01-10 02:18:22.707080177 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 02:18:22.718045  203181 docker.go:319] overlay module found
	I0110 02:18:22.720108  203181 out.go:179] * Using the docker driver based on user configuration
	I0110 02:18:22.721176  203181 start.go:309] selected driver: docker
	I0110 02:18:22.721193  203181 start.go:928] validating driver "docker" against <nil>
	I0110 02:18:22.721207  203181 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:18:22.723254  203181 out.go:203] 
	W0110 02:18:22.724332  203181 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0110 02:18:22.725386  203181 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-647049 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-647049" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:17:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-138757
contexts:
- context:
    cluster: running-upgrade-138757
    user: running-upgrade-138757
  name: running-upgrade-138757
current-context: ""
kind: Config
users:
- name: running-upgrade-138757
  user:
    client-certificate: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/running-upgrade-138757/client.crt
    client-key: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/running-upgrade-138757/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-647049

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647049"

                                                
                                                
----------------------- debugLogs end: false-647049 [took: 4.385320542s] --------------------------------
helpers_test.go:176: Cleaning up "false-647049" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-647049
--- PASS: TestNetworkPlugins/group/false (4.74s)
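The pass here is the rejection itself: with the crio runtime a CNI is mandatory, so --cni=false is refused with the MK_USAGE error above. A hedged sketch of the forms that are accepted, mirroring the other network-plugin runs in this report (the profile name "demo" is hypothetical):
	# rejected for crio
	$ minikube start -p demo --cni=false --driver=docker --container-runtime=crio
	# accepted: let minikube auto-select a CNI, or name one explicitly
	$ minikube start -p demo --driver=docker --container-runtime=crio
	$ minikube start -p demo --cni=kindnet --driver=docker --container-runtime=crio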

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22414-10552/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-731674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-731674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (313.587136ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
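The `ssh: Process exited with status 3` line is the actual assertion: `systemctl is-active` exits non-zero when the unit is not active (3 in this run), which is what the test expects after a --no-kubernetes start. The same probe can be run by hand without --quiet so the state is printed as well as the exit code:
	$ out/minikube-linux-amd64 ssh -p NoKubernetes-731674 "sudo systemctl is-active kubelet"
	# a non-zero exit (and a state other than "active") is the expected result here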

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.38s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-731674
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-731674: (1.251204698s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-731674 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-731674 --driver=docker  --container-runtime=crio: (7.963580642s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-731674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-731674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (366.275137ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.22s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-382886 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-382886 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.032653566s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-382886" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-382886
--- PASS: TestPreload/PreloadSrc/gcs (4.22s)

                                                
                                    
TestPreload/PreloadSrc/github (8s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-148641 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-148641 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (7.820577385s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-148641" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-148641
--- PASS: TestPreload/PreloadSrc/github (8.00s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.41s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-876077 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-876077" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-876077
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.41s)
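The gcs-cached variant finishes in 0.41s because the v1.34.0-rc.2 preload was already downloaded by the github run above, so this --download-only start is served entirely from the local cache. A rough manual confirmation (the preloaded-tarball subdirectory is an assumption about minikube's cache layout, not something shown in this report):
	$ ls /home/jenkins/minikube-integration/22414-10552/.minikube/cache/preloaded-tarball/
	# a cri-o preload tarball for v1.34.0-rc.2 should already be present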

                                                
                                    
TestNetworkPlugins/group/auto/Start (36.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (36.609899773s)
--- PASS: TestNetworkPlugins/group/auto/Start (36.61s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-647049 "pgrep -a kubelet"
I0110 02:21:03.492014   14086 config.go:182] Loaded profile config "auto-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gqkhp" [0cb65c65-c9d2-434b-9763-6c505030c4ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gqkhp" [0cb65c65-c9d2-434b-9763-6c505030c4ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.002790666s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)
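Each NetCatPod step below follows the same pattern: force-replace the small netcat deployment and wait for its pod to become Ready. Outside the harness, roughly the same wait can be written with kubectl alone (context name taken from this run; the 5m timeout is an arbitrary choice):
	$ kubectl --context auto-647049 replace --force -f testdata/netcat-deployment.yaml
	$ kubectl --context auto-647049 wait --for=condition=Ready pod -l app=netcat --timeout=5m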

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)
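Localhost and HairPin reuse the netcat pod from the previous step: Localhost checks that the pod can reach its own port 8080 over loopback, while HairPin connects back to the pod through its own `netcat` service name, which only succeeds if the CNI handles hairpin traffic. The two probes, as executed above:
	$ kubectl --context auto-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	$ kubectl --context auto-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"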

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (39.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.576687901s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.58s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (46.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (46.742593309s)
--- PASS: TestNetworkPlugins/group/calico/Start (46.74s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (43.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (43.541619857s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-k54ht" [542e58e2-c02e-4818-8e99-d7daa019b4bf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006711869s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-647049 "pgrep -a kubelet"
I0110 02:22:16.077722   14086 config.go:182] Loaded profile config "kindnet-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rvt7k" [4f0ba652-0517-4018-bb1d-5174e4b55350] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rvt7k" [4f0ba652-0517-4018-bb1d-5174e4b55350] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003001311s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-vt4kn" [34ca0116-e33f-444d-9496-65c42c2160b3] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-vt4kn" [34ca0116-e33f-444d-9496-65c42c2160b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003597487s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-647049 "pgrep -a kubelet"
I0110 02:22:39.585512   14086 config.go:182] Loaded profile config "calico-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zmgzt" [4a95715d-4689-4808-81c1-6c959385f0af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zmgzt" [4a95715d-4689-4808-81c1-6c959385f0af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003533162s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (59.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (59.195475303s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-647049 "pgrep -a kubelet"
I0110 02:22:52.015488   14086 config.go:182] Loaded profile config "custom-flannel-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mbjdc" [ae00e124-7fbb-4960-bd12-4fe7adf23e08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mbjdc" [ae00e124-7fbb-4960-bd12-4fe7adf23e08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003473183s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (41.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (41.863749795s)
--- PASS: TestNetworkPlugins/group/flannel/Start (41.86s)

TestNetworkPlugins/group/bridge/Start (65.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0110 02:23:32.101591   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/functional-224091/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-647049 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m5.406726759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.41s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-647049 "pgrep -a kubelet"
I0110 02:23:45.199179   14086 config.go:182] Loaded profile config "enable-default-cni-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-sg496" [f4fb070f-326e-409e-949a-83da2ccb9f49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-sg496" [f4fb070f-326e-409e-949a-83da2ccb9f49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003542392s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-mc6s2" [d3d6b088-c47b-4ef3-b702-2504dac90590] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004073191s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-647049 "pgrep -a kubelet"
I0110 02:23:56.410259   14086 config.go:182] Loaded profile config "flannel-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ftf5v" [8c59c8dc-5ed7-4d18-b251-c09e59b9abb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-ftf5v" [8c59c8dc-5ed7-4d18-b251-c09e59b9abb9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003850475s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (50.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.954882637s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.96s)

TestStartStop/group/no-preload/serial/FirstStart (50.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.432799973s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.43s)

TestStartStop/group/embed-certs/serial/FirstStart (41.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (41.894214965s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.89s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-647049 "pgrep -a kubelet"
I0110 02:24:27.843245   14086 config.go:182] Loaded profile config "bridge-647049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-647049 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pplgf" [f08ed1c7-d4e1-4ffa-b099-727894394d17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pplgf" [f08ed1c7-d4e1-4ffa-b099-727894394d17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.002717596s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-647049 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-647049 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (38.671701686s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.67s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-188604 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [70c42a4d-ef36-441a-9154-7c8a868b9828] Pending
helpers_test.go:353: "busybox" [70c42a4d-ef36-441a-9154-7c8a868b9828] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [70c42a4d-ef36-441a-9154-7c8a868b9828] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.007547096s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-188604 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.42s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-872415 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bd50d3b2-8ab9-4ef9-9105-c46448470074] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bd50d3b2-8ab9-4ef9-9105-c46448470074] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004053986s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-872415 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/no-preload/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-190877 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [db660e0d-265f-4939-9a77-c311c0ded30d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [db660e0d-265f-4939-9a77-c311c0ded30d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.002906414s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-190877 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-188604 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-188604 --alsologtostderr -v=3: (16.068918324s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

TestStartStop/group/embed-certs/serial/Stop (18.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-872415 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-872415 --alsologtostderr -v=3: (18.147051199s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.15s)

TestStartStop/group/no-preload/serial/Stop (18.19s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-190877 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-190877 --alsologtostderr -v=3: (18.193167628s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.19s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604: exit status 7 (74.068032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-188604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (48.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-188604 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.886534525s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-188604 -n old-k8s-version-188604
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415: exit status 7 (82.839755ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-872415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (48.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-872415 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.368424897s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872415 -n embed-certs-872415
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.76s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2381c602-8214-4872-a765-3ac283fb99a2] Pending
helpers_test.go:353: "busybox" [2381c602-8214-4872-a765-3ac283fb99a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2381c602-8214-4872-a765-3ac283fb99a2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005084294s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877: exit status 7 (100.047206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-190877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (50.15s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-190877 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.817269179s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190877 -n no-preload-190877
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (17.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-313784 --alsologtostderr -v=3
E0110 02:26:03.685768   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:03.691018   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:03.701250   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:03.721506   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:03.761794   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:03.842078   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:04.002491   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:04.323063   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:04.964249   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:06.245079   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-313784 --alsologtostderr -v=3: (17.752564033s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.75s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784: exit status 7 (77.641174ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-313784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 02:26:08.805468   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:26:13.925610   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-313784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (43.178032652s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313784 -n default-k8s-diff-port-313784
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.51s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lq5lf" [44e589a7-3475-4e98-95fc-c5f990e17892] Running
E0110 02:26:24.166218   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/auto-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003901641s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lq5lf" [44e589a7-3475-4e98-95fc-c5f990e17892] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004014528s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-188604 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jwghz" [5432265c-0d37-4587-b87f-074f8b58198b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002473901s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-188604 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tc5gq" [93bbb9ab-588f-41cc-9e34-6b3bfe4ba79e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002991032s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jwghz" [5432265c-0d37-4587-b87f-074f8b58198b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003253704s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-872415 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-872415 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tc5gq" [93bbb9ab-588f-41cc-9e34-6b3bfe4ba79e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003133852s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-190877 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/FirstStart (24.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (24.769977479s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.77s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-190877 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cvzmq" [cbc4d840-9be8-4904-9cd0-a35b6a2c6149] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00361562s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cvzmq" [cbc4d840-9be8-4904-9cd0-a35b6a2c6149] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005019189s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-313784 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-313784 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (7.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-843779 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-843779 --alsologtostderr -v=3: (7.958194193s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.96s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779: exit status 7 (74.240349ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-843779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (11.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 02:27:14.875551   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/kindnet-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:27:19.995727   14086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/kindnet-647049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-843779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (10.898651967s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843779 -n newest-cni-843779
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.22s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843779 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-647049 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-647049" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:17:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-138757
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:17:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: test-preload-107034
contexts:
- context:
    cluster: running-upgrade-138757
    user: running-upgrade-138757
  name: running-upgrade-138757
- context:
    cluster: test-preload-107034
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:17:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: test-preload-107034
  name: test-preload-107034
current-context: ""
kind: Config
users:
- name: running-upgrade-138757
  user:
    client-certificate: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/running-upgrade-138757/client.crt
    client-key: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/running-upgrade-138757/client.key
- name: test-preload-107034
  user:
    client-certificate: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/test-preload-107034/client.crt
    client-key: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/test-preload-107034/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-647049

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647049"

                                                
                                                
----------------------- debugLogs end: kubenet-647049 [took: 3.264853863s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-647049" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-647049
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-647049 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-647049" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22414-10552/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:17:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-138757
contexts:
- context:
    cluster: running-upgrade-138757
    user: running-upgrade-138757
  name: running-upgrade-138757
current-context: ""
kind: Config
users:
- name: running-upgrade-138757
  user:
    client-certificate: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/running-upgrade-138757/client.crt
    client-key: /home/jenkins/minikube-integration/22414-10552/.minikube/profiles/running-upgrade-138757/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-647049

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-647049" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647049"

                                                
                                                
----------------------- debugLogs end: cilium-647049 [took: 3.535621861s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-647049" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-647049
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-249405" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-249405
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    